Microsoft's SQL Server 2014 early code: First look

Welcome to the Hekaton

In-memory database engine, improved integration with Windows Azure, and new indexing technology for high-performance data warehousing applications - there's plenty to like in SQL Server 2014, which was released to manufacturing on Tuesday.

But while Microsoft has been busy and done some heavy lifting, the code that will become generally available on 1 April has some glaring rough edges.

Let's tackle the best bit first. I looked at CTP2, saw later builds, and tried them in a hands-on lab.

The in-memory database engine codenamed Hekaton is the most eye-catching feature, thanks to the dramatic performance increase it can offer – up to 30 times, according to Microsoft. The feature has been in development for five years, program manager lead Kevin Liu told journalists at the SQL Server 2014 workshop I attended.

The database engine is new code which accesses data directly in memory, uses a high level of concurrency, and compiles stored procedures to native code for further optimisation. A copy of the data is streamed to disk for persistence, though you can disable this for maximum performance if you do not care about losing data.
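Here's a rough sketch of what that looks like in T-SQL, assuming a database that already has a memory-optimised filegroup; the table and column names are mine, not Microsoft's:

    CREATE TABLE dbo.Orders
    (
        OrderId    INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        CustomerId INT NOT NULL,
        OrderDate  DATETIME2 NOT NULL,
        Amount     MONEY NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
    -- DURABILITY = SCHEMA_ONLY skips the streaming to disk entirely:
    -- faster still, but the data does not survive a restart.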

The performance benefit is real. Even on a modest virtual machine running on Azure (four cores, 7GB memory) I saw the time for inserting 100,000 orders, each in its own transaction, decline from two minutes and 54 seconds to 36 seconds after switching to in-memory tables.

Another plus is integration. You can mix in-memory and disk-based tables in a database, though querying across both is inefficient.
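A quick illustrative example: assuming the in-memory dbo.Orders table above and a conventional disk-based dbo.Customers table (again, the names are mine), a cross-container query is just an ordinary join:

    SELECT c.CustomerName, COUNT(*) AS OrderCount
    FROM dbo.Orders AS o          -- memory-optimised
    JOIN dbo.Customers AS c       -- disk-based
        ON c.CustomerId = o.CustomerId
    GROUP BY c.CustomerName;

Queries like this run through the regular interpreted engine rather than as native code, which is where the inefficiency creeps in.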

There are limitations, though. The most severe is a long list of T-SQL features that are not supported for in-memory tables, including IDENTITY, UNIQUE constraints, OUTER JOIN, IN, LIKE and DISTINCT, as well as triggers and BLOB fields. Workarounds are suggested, but this does mean a porting effort in order to take advantage.
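One suggested workaround, for example, is to replace IDENTITY with a SEQUENCE object and hand out key values explicitly - a sketch, using the illustrative Orders table from earlier:

    CREATE SEQUENCE dbo.OrderIdSeq AS INT START WITH 1 INCREMENT BY 1;

    -- NEXT VALUE FOR works from ordinary (interpreted) T-SQL;
    -- natively compiled procedures add further restrictions.
    INSERT INTO dbo.Orders (OrderId, CustomerId, OrderDate, Amount)
    VALUES (NEXT VALUE FOR dbo.OrderIdSeq, 42, SYSUTCDATETIME(), 99.95);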

SQL Server 2014: all the features, but can they be used?

There are some other limitations in this first release. One is a recommendation that in-memory data should not exceed 256GB.

“Rest assured, that is something we will bump up drastically in the next release,” Liu said.

The other is that “the recommendation for hardware is two sockets” to avoid issues with NUMA (Non-Uniform Memory Access) that affect performance.

The best fit for in-memory tables is where business logic lives in stored procedures and client-server communication is not too chatty. Applications that implement business logic in external code are a poorer fit.
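To give a flavour of the stored procedure side, here is a sketch of a natively compiled procedure against the illustrative Orders table; the WITH options and the ATOMIC block are required boilerplate:

    CREATE PROCEDURE dbo.InsertOrder
        @OrderId INT, @CustomerId INT, @Amount MONEY
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        -- Compiled to native code when the procedure is created, not interpreted at run time.
        INSERT INTO dbo.Orders (OrderId, CustomerId, OrderDate, Amount)
        VALUES (@OrderId, @CustomerId, SYSUTCDATETIME(), @Amount);
    END;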

Microsoft is also making a big deal of new Azure integration, and there are several possible scenarios. You can mount database files held in Azure blob storage; latency makes this unsuitable in many cases, though SQL Server will cache the most active data, and it can be useful for archiving.
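Setting that up looks roughly like this; the storage account, container and shared access signature below are placeholders, not working values:

    CREATE CREDENTIAL [https://myaccount.blob.core.windows.net/data]   -- placeholder container URL
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET = '<shared-access-signature-token>';                   -- placeholder SAS token

    CREATE DATABASE ArchiveDB
    ON (NAME = ArchiveDB_data,
        FILENAME = 'https://myaccount.blob.core.windows.net/data/ArchiveDB_data.mdf')
    LOG ON (NAME = ArchiveDB_log,
        FILENAME = 'https://myaccount.blob.core.windows.net/data/ArchiveDB_log.ldf');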

Of wider use is the ability to back up to Azure storage, which is now built in. In Management Studio you can select URL as a backup destination, which prompts for Azure credentials. There is also a new Managed Backup facility, aimed at smaller organisations, which will automatically back up databases to Azure storage; you need only configure the credentials and the data retention period.
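The same thing in T-SQL looks roughly like this; the storage account name, access key and URL are placeholders:

    CREATE CREDENTIAL AzureBackupCred
    WITH IDENTITY = 'mystorageaccount',             -- placeholder storage account name
         SECRET = '<storage-account-access-key>';   -- placeholder access key

    BACKUP DATABASE Sales
    TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/Sales.bak'
    WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION, STATS = 10;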

Another Azure feature is replication to SQL Server instances running on Azure VMs. An Add Azure Replica wizard sets up an AlwaysOn availability group secondary in the cloud.

Resource Governor lets you limit resources to specific users
