Apr 19

I am now, officially, a Supper Presenter!

I finally had an opportunity to present at the SQL Supper event in London on the 18th of April 2017. I thoroughly enjoyed the event and the pub session that followed. I even managed to learn a few things myself! Thank you very much, James and the other organisers, for presenting me with this opportunity to share knowledge and experience with others and contribute to the community. Many thanks to the people who attended the event, and particularly to those who participated in the discussions.

The talk I presented was on BI ALM with TFS 2017 (DevOps for MS BI using TFS 2017). Please let me know any feedback on the session by commenting on this post; it will be very much appreciated. If you have any questions, or if you want any part of the scripts I used in the demo, please let me know.

Apr 29

Deploying Dacpac with Octopus

Here’s a full deployment script for deploying a dacpac with Octopus to SQL Server 2014:

# Set params
$databaseName = "AdventureWorksDW"

# Add the DLL
# For 32-bit machines
#Add-Type -Path "C:\Program Files\Microsoft SQL Server\120\DAC\bin\Microsoft.SqlServer.Dac.dll"
# For 64-bit machines
Add-Type -Path "C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\Microsoft.SqlServer.Dac.dll"

# Create the connection string
$d = New-Object Microsoft.SqlServer.Dac.DacServices "server=(local)"

# Set package version, such as $PackageVersion = ""
$PackageVersion = $OctopusParameters['Octopus.Action[Push nuget package to the server].Package.NuGetPackageVersion']

# Set application directory, such as "C:\Octopus\Applications\Test\AdventureWorksDW\"
$ApplicationPath = $OctopusParameters['Octopus.Tentacle.Agent.ApplicationDirectoryPath']

# Set environment name
$EnvName = $OctopusParameters['Octopus.Environment.Name']

# Set package name
$PackageName = $databaseName

# Build the full path to the dacpac
$dacpac = $ApplicationPath + "\" + $EnvName + "\" + $PackageName + "\" + $PackageVersion + "\" + $databaseName + ".dacpac"
Write-Host $dacpac

# Load the dacpac from file
$dp = [Microsoft.SqlServer.Dac.DacPackage]::Load($dacpac)

# Set the DacDeployOptions
$options = New-Object Microsoft.SqlServer.Dac.DacDeployOptions -Property @{
    'BlockOnPossibleDataLoss' = $true;
    'DropObjectsNotInSource' = $false;
    'ScriptDatabaseOptions' = $true;
    'IgnorePermissions' = $true;
    'IgnoreRoleMembership' = $true
}

# Generate the deployment script
$deployScriptName = $databaseName + ".sql"
$deployScript = $d.GenerateDeployScript($dp, $databaseName, $options)

# Return the script to the log
Write-Host $deployScript

# Write the script out to a file
$deployScript | Out-File $deployScriptName

# Deploy the dacpac
$d.Deploy($dp, $databaseName, $true, $options)
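One optional refinement, sketched here rather than taken from the script above: DacServices.Deploy throws a DacServicesException when the deployment fails, so wrapping the final call in try/catch and exiting with a non-zero code makes Octopus mark the step as failed. The variable names are the ones defined above.

```powershell
# Deploy the dacpac, failing the Octopus step on any deployment error
try {
    $d.Deploy($dp, $databaseName, $true, $options)
    Write-Host "Deployment of $databaseName succeeded"
}
catch [Microsoft.SqlServer.Dac.DacServicesException] {
    Write-Error "Deployment of $databaseName failed: $($_.Exception.Message)"
    exit 1
}
```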

Mar 24

New Dimension or a New Attribute?

Time and time again I am faced with the same dimensional modeling debate: “Do I create a new dimension or just add an attribute to an existing one?”

I believe there is no right and wrong here, only considerations of performance and convenience of use. Performance here, however, means both the user experience and the time to market, that is, development time. I can see an overhead in creating a separate dimension: extra ETL processes, extra cube objects, extra keys in the fact table, extra maintenance required. The bottom line of this decision, in my opinion, is whether it provides benefits in terms of:
  1. creating a report (easier and more intuitive), or
  2. creating the datamart (more straightforward ETL or MDX calculations without compromising the former), or
  3. the user experience (query execution time).

If there is no tangible benefit to any of the above or, worse still, if any of them are compromised as a result of adding a dimension, it cannot be a good idea.

A good reason to create a dimension could be when two facts of different granularity need to be linked together, such as actual revenue and budget/plan/forecast (convenience and functionality). Another good reason is to normalise rapidly-changing attributes out of a fat, long and otherwise relatively static monster dimension (user experience and ETL benefit). Yet another is to create a junk dimension when there are lots of unrelated low-granularity columns in a fact table (half a dozen nominal categories with a few unique values each that do not belong in any current dimension).

Ultimately, if it is really a twilight zone and you are still sitting on the fence – just make it an attribute and promote it to a dimension once you get a clear case for it: you will reduce your development time, and the business will love you for that alone 🙂

I hope this helps someone decide. Please do comment and suggest your own considerations!

Oct 16

Disk and memory-optimized tables in a single query: cross-container transactions

When you set up a database for memory-optimized tables in SQL Server 2014, it is recommended that you enable the MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT database option.

This setting is important when using transactions involving both in-memory and disk-based tables, the so-called “cross-container transactions”. As you may know, the SQL Server default transaction isolation level is READ COMMITTED. The above setting elevates the isolation level for memory-optimized tables to SNAPSHOT in every cross-container transaction running at a lower isolation level (such as the default, READ COMMITTED). It is equivalent to adding a WITH (SNAPSHOT) hint to every DML operation involving a memory-optimized table in a cross-container transaction.

For more information, see http://msdn.microsoft.com/en-us/library/dn133175.aspx

If this setting is not specified, you may see the following message returned when attempting a cross-container transaction:

Msg 41368, Level 16, State 0, Line 31
Accessing memory optimized tables using the READ COMMITTED isolation level is supported only for autocommit transactions. It is not supported for explicit or implicit transactions. Provide a supported isolation level for the memory optimized table using a table hint, such as WITH (SNAPSHOT).

So one way of dealing with this message is to use a WITH (SNAPSHOT) hint next to every memory-optimised table involved in the DML statement, just as advised by the error message:

UPDATE dt
SET dt_data = mot.im_data
FROM MemoryOptimisedTable mot WITH (SNAPSHOT)
   INNER JOIN DiskTable dt ON mot.id = dt.id


However, if you expect to use cross-container transactions often, a more convenient way is to specify the database option MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON.
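For reference, the option is switched on with a plain ALTER DATABASE statement; the database name AdventureWorksDW below is just an example:

```sql
ALTER DATABASE [AdventureWorksDW]
SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;
```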

If SERIALIZABLE or REPEATABLE READ transaction isolation level is used, the error message is somewhat different:

Msg 41333, Level 16, State 1, Line 29
The following transactions must access memory optimized tables and natively compiled stored procedures under snapshot isolation: RepeatableRead transactions, Serializable transactions, and transactions that access tables that are not memory optimized in RepeatableRead or Serializable isolation.

Here, the isolation level is higher than SNAPSHOT, so the elevation setting is not going to help, and you must use WITH (SNAPSHOT) hints:


IF OBJECT_ID('dbo.MemoryOptimisedTable') IS NOT NULL DROP TABLE dbo.MemoryOptimisedTable
CREATE TABLE dbo.MemoryOptimisedTable (
   id int NOT NULL PRIMARY KEY NONCLUSTERED,
   im_data int) WITH (MEMORY_OPTIMIZED = ON)

IF OBJECT_ID('dbo.DiskTable') IS NOT NULL DROP TABLE dbo.DiskTable
CREATE TABLE dbo.DiskTable (
   id int NOT NULL PRIMARY KEY,
   dt_data int)

INSERT INTO MemoryOptimisedTable (id, im_data)
VALUES (1,3876),(2,6576),(3,3345)
INSERT INTO DiskTable (id, dt_data)
VALUES (1,45756),(2,1231),(3,3432)

SELECT * FROM MemoryOptimisedTable mot WITH (SNAPSHOT)
   INNER JOIN DiskTable dt ON mot.id = dt.id

UPDATE dt
SET dt_data = mot.im_data
FROM MemoryOptimisedTable mot WITH (SNAPSHOT)
   INNER JOIN DiskTable dt ON mot.id = dt.id

DELETE FROM MemoryOptimisedTable WITH (SNAPSHOT)

Generic Previous Period MDX Calculation Measure


I recently had a requirement to create a Calculated Member or Calculation for a generic previous period measure. The user wanted to use the same calculation with a number of different date cube dimensions. For example, one may want to see the previous value of Internet Sales for Adventure Works over Order Date or over Delivery Date, using the same calculated member for convenience and clarity. In addition, you want to display something intuitive and user-friendly when the value is not available (you are looking at the first member of the dimension level). Brief googling was to no avail, so I decided to blog the solution I created, which I thought may come in pretty handy for those not particularly hot on MDX.

The idea is to use the CoalesceEmpty MDX function:

WITH MEMBER [Measures].[Previous Internet Sales Amount] AS
CoalesceEmpty(
   ([Date].[Calendar].PrevMember, [Measures].[Internet Sales Amount]),
   ([Delivery Date].[Calendar].PrevMember, [Measures].[Internet Sales Amount]),
   'Not Available'
), FORMAT_STRING='Currency'
SELECT
   {[Measures].[Internet Sales Amount],
    [Measures].[Previous Internet Sales Amount]} ON COLUMNS,
   {[Delivery Date].[Calendar].[Calendar Year]} ON ROWS
FROM [Adventure Works]

…which results in the following:

Previous Internet Sales over Delivery Date

Now, the same calculated member can be used with a different date dimension, for here’s an example with Order Date (aka “Date” in Adventure Works cube):

WITH MEMBER [Measures].[Previous Internet Sales Amount] AS
CoalesceEmpty(
   ([Date].[Calendar].PrevMember, [Measures].[Internet Sales Amount]),
   ([Delivery Date].[Calendar].PrevMember, [Measures].[Internet Sales Amount]),
   'Not Available'
), FORMAT_STRING='Currency'
SELECT
   {[Measures].[Internet Sales Amount],
    [Measures].[Previous Internet Sales Amount]} ON COLUMNS,
   {[Date].[Calendar].[Calendar Year]} ON ROWS
FROM [Adventure Works]

…which results in slightly different figures:

Using Order Date (Called "Date" in Adventure works)

Obviously this calculated member is very simple, but the idea remains the same for any previous period calculation. Naturally it makes more sense to use it as a cube calculation rather than a session-scoped member:

Using generic previous period MDX as a calculation

Hope this is useful.

If you would like an alternative solution, have a look at this one by Alex Whittles from Purple Frog. There are a few others too, using the IIF function or a utility dimension, but I did not like them as much.

Happy MDXing!


Apr 20

Kerberos Helper

I have blogged previously about setting up service principal names for your architecture. To make it really easy to form a perfect SETSPN statement first time and every time, I put together this little spreadsheet that will build the statements for you. It will also help you plan your installation architecture and service accounts.

All you need to do is carefully fill in all the fields that appear in yellow and your SPN statements will be done. Just make sure you copy both the NetBIOS-based and FQDN-based columns to a batch file and run it as a domain admin.

This spreadsheet was a result of many hours of research, so if you feel that it helped you save time and money, please donate any amount you see fit using the button on the right. I will be happy to answer any questions you may have on it, please feel free to comment.

I provide the spreadsheet on “as is” basis and reserve the right to withdraw it at any time. You are free to use it in any way you like, but not distribute it or any part of it without my explicit written permission.

You can download the spreadsheet from here. Please leave a comment to this post if you found it helpful, or indeed otherwise.

Apr 14

Service Principal Name (SPN): how to get it right?

Most of the difficulties in configuring Kerberos actually stem from this absolutely fundamental step in the process: getting your SPNs right. SPNs help Kerberos find the service and the computer (principal) that hosts that service. That is, SPNs are all about the target. They identify the destination of your journey, much like the address on a piece of mail. In the case of Kerberos delegation, they are not about the client, not about the delegating service, but about the service that serves the data you are trying to access. OK, there are exceptions, of course: most notably when you would like to use Kerberos to connect to the front end of your application (e.g. a SharePoint portal), but they only prove the rule, as in this case the SharePoint portal is the target of your journey!

Now the second important concept to grasp is the syntax of the command commonly used to set up SPNs: SETSPN. The inline help is very generic and does not go very far in helping you with the syntax, but it tells you two very important things: you need to register your SPNs against an account, and you need to identify the service:

setspn.exe -S <SPN> <AccountName>

Note here the use of the -S flag. If your domain is at the 2003 functional level or later, this is the flag to use: it automatically checks for any duplicate SPNs before adding another. This is a very important step: you do not want to confuse Kerberos about the destination of your journey! In pre-2003 domains you have to check for duplicates explicitly and add SPNs with the -A flag.
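Either way, it never hurts to check by hand before registering. Newer versions of setspn.exe support the -Q flag to search for an existing SPN; the SPN and account below (SQL01.contoso.com, CONTOSO\svc-sql) are made-up example names:

```
setspn -Q MSSQLSvc/SQL01.contoso.com:1433
setspn -A MSSQLSvc/SQL01.contoso.com:1433 CONTOSO\svc-sql
```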

So how do we identify the service? This is the hardest bit: there are three components to this:

  1. Service class or description of the service
  2. Service Host name or DNS “A” record (not CNAMEs!)
  3. Port or Instance Name

A service class is merely an arbitrary label describing the service; naturally, it should be a commonly recognised one, such as HTTP for a web site (the IIS service). Here are some examples of service classes:

Service Class (Type) | Description
MSSQLSvc             | SQL Server Database Engine
MSOLAPSvc.3          | SQL Server Analysis Services
MSOLAPDisco          | SQL Server Analysis Services Discovery Service
MSSQLDisco           | SQL Server Browser
HTTP                 | IIS service

The second part is the host name: the definition of where the service is running. This is normally the NetBIOS name of the computer hosting the target service; however, in the case of the IIS service, it should be the DNS “A” record of the service you are authenticating to. Please bear in mind the 15-character limit of the Microsoft implementation of NetBIOS: make sure your server naming conventions do not exceed this limit. The NetBIOS name is one of the properties of your computer account in Active Directory. The easiest way to check the NetBIOS name is to display the Computer Name tab of the host computer properties (right-click on My Computer -> Properties) and then click “Change…” -> “More…”. Checking DNS records is a little more involved, but your domain admin will be able to help.

The third part only applies to services running on non-default ports and to named instances of Analysis Services. Naturally, you need to know if that is indeed the case. The table below should help you make the call:

Service | Default Port | How to tell if the service is set to default
SQL Server Database Engine | 1433 | A default instance usually runs on the default port. Named instances are usually set to dynamic allocation; check SQL Server Configuration Manager.
SQL Server Analysis Services | 2383 | If the Port property of the Analysis Services instance in Management Studio has the default value of 0, the instance uses the default port, 2383.
SSAS Discovery Service | 2384 | Default ports are usually used. Check what ports are used by sqlbrowser.exe using the “netstat -aob” command.
SQL Server Browser | 1434 | Default ports are usually used. Check what ports are used by sqlbrowser.exe using the “netstat -aob” command.
IIS service | 80 | Check the web site properties in IIS Manager. If a port is not specified as part of the web site URL, it is usually the default.

Now you should be able to form a correct SPN using the following syntax:

<service class>/<Host Name>[:<port or instance>]

Please remember to use the instance name for named instances of Analysis Services. If you try to specify a port number instead, it will be treated as part of the host name and, of course, the call will fail miserably.

Finally, one needs to register this beautifully crafted SPN against an account. This account is the identity of the service you are trying to reach: the identity of the target service, the account that runs that service. Ideally, this is an Active Directory service account specially configured to be used as a service identity. If the service is running as “Local System”, then this is the Active Directory account of the computer that runs the service. Either way, in the SETSPN command it has to follow the correctly formed SPN, using the following syntax:

<domain name>\<account name>

Thus the complete syntax of a correct SETSPN statement is as follows:

setspn -s <service class>/<Host Name>[:<port or instance>] <domain name>\<account name>
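Putting it all together for a default SQL Server Database Engine instance, registering both the FQDN-based and the NetBIOS-based SPN (the host SQL01.contoso.com and the account CONTOSO\svc-sqlengine are placeholder names; substitute your own):

```
setspn -s MSSQLSvc/SQL01.contoso.com:1433 CONTOSO\svc-sqlengine
setspn -s MSSQLSvc/SQL01:1433 CONTOSO\svc-sqlengine
```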

To make it completely fool-proof, I have set up a very straightforward spreadsheet that you can download and use for building a perfect SPN statement every time. Please refer to the Kerberos Helper post to download it.

Apr 14

Claims to Windows Token Service Identity

Claims to Windows Token Service (C2WTS) plays a very important role in SharePoint 2010 Kerberos delegation. It is part of Windows Identity Foundation. It needs to be running on the SharePoint application server in order for a SharePoint service to be able to get a user Windows token in exchange for a UPN (user principal name) claim. SharePoint uses claims for inter-farm communication. To authenticate to a service outside the SharePoint farm that is not claims-aware, SharePoint needs to obtain a Windows token to use for authentication. In the case of Kerberos delegation, this token is then used to get a service ticket to present to the target service. The diagram here is a simplification of the complete process to highlight the role of C2WTS. For full details of the Kerberos protocol mechanism, please refer to a far more serious source on Microsoft TechNet.

When C2WTS is not working properly, no delegation is possible, all calls to external services would fail. Inside SharePoint this failure would result in errors such as the following (see Rodney Viana’s blog):

"Could not retrieve a valid windows identity for NTName='Domain\Joe.Bloggs', 
UPN='Joe.Bloggs@Domain.com'. UPN is required when Kerberos constrained delegation is used."

The widely referenced Microsoft whitepaper on configuring Kerberos for SharePoint suggests changing the identity of the Claims to Windows Token Service (C2WTS) to an Active Directory service account. While this seems like a good idea at first, it may cause problems with the stability of the service. It does not actually achieve anything in terms of making the configuration more secure, as the account must be a member of the host's local Administrators group for C2WTS to function correctly, so my recommendation is: leave it be as Local System. Should you wish to change the delegation properties or other attributes for the identity of the service, please use the host computer account.

I had a case of C2WTS becoming unstable after changing its identity from the default Local System to an Active Directory account when configuring Kerberos constrained delegation for a SharePoint 2010 and SQL Server 2012 installation recently. Straight after a (re)start, the service would operate correctly, duly returning a Windows token for a UPN claim, but after a few minutes it would inevitably get tired and begin returning an error, as if the user did not exist.

Changing the service identity back to Local System solved this problem: the system has been working reliably for many months now. Changing the identity back to an AD account brings back the problematic behaviour.

Please bear in mind that if you have changed the identity of Claims to Windows Token Service in SharePoint, you need to tell SharePoint of your decision to undo that change. You can do so using SharePoint PowerShell console:

Get-SPServiceInstance | Where {$_.TypeName.StartsWith("Claims")} |
ForEach-Object {
    $_.Service.ProcessIdentity.CurrentIdentityType = 0;
    $_.Service.ProcessIdentity.Update()
}
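It may also be worth restarting the Windows service afterwards so the change takes effect straight away; this assumes the default service name of c2WTS:

```powershell
# Restart the Claims to Windows Token Service (service name assumed to be c2WTS)
Restart-Service c2WTS
```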


Mar 30

I am presenting at SQL Bits!

Saying I am excited to present at SQL Bits is an understatement. I always enjoy sharing what I know with others, but now I have a perfect chance of sharing what I’ve learned about Kerberos delegation with the UK SQL Server community and the entire world! Click on the part-eaten disk array image to find out more…