With the advent of big data, more and more technological solutions are coming out to ensure better data management. Monitoring databases is an important task for any DBA, consultant or even developer, as it helps make sure applications and websites run as smoothly and swiftly as possible; any delay in getting the needed information can cause serious problems, hence the need for better database optimization.

In this article, I will talk to you about a highly recommended way of optimizing the performance of your SQL Server database:

SQL Server In-Memory OLTP:

In-Memory OLTP, also known as “Hekaton” or “In-Memory Optimization”, is the SQL Server technology for improving the performance of transaction processing. It was first introduced in SQL Server 2014 (Enterprise and Developer editions) and further enhanced in SQL Server 2016, SQL Server 2017 and Azure SQL Database (Premium tier).

In-Memory OLTP is made of components that differ from what we see in a normal SQL Server database, the first one being the “memory-optimized filegroup”. One difference between it and a normal SQL Server filegroup is that you only provide the path to a folder where you want the filegroup’s data to be stored, and inside it SQL Server writes a series of checkpoint files:

  • These files are independent of your database files (MDF, LDF), and they will be stored in a separate folder that you will specify when creating or altering the database.
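
To make this concrete, here is a minimal sketch of adding such a filegroup and its container to an existing database; the database name SalesDB, the filegroup name and the folder path are placeholders for illustration only:

    -- Add a memory-optimized filegroup to a (hypothetical) database.
    ALTER DATABASE SalesDB
        ADD FILEGROUP SalesDB_MOD CONTAINS MEMORY_OPTIMIZED_DATA;

    -- The "file" is really a folder: SQL Server writes the checkpoint files inside it.
    ALTER DATABASE SalesDB
        ADD FILE (NAME = 'SalesDB_MOD_Container', FILENAME = 'D:\Data\SalesDB_MOD')
        TO FILEGROUP SalesDB_MOD;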

You are then going to create what we call “memory-optimized tables”. Every memory-optimized table must have at least one index (which is itself memory optimized), because it is the indexes that connect the rows together, and for each index you have two options (both are shown in the sketch below):

  • a non-clustered (range) index
  • a hash index (read SQL Server Index Design)
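
As a minimal sketch of both index options (the table and column names are illustrative, not taken from any particular schema):

    -- A memory-optimized table with a non-clustered (range) primary key
    -- and a hash index; the durability options are discussed below.
    CREATE TABLE dbo.ShoppingCart
    (
        CartId      INT       NOT NULL PRIMARY KEY NONCLUSTERED,   -- range (non-clustered) index
        UserId      INT       NOT NULL
            INDEX ix_UserId HASH WITH (BUCKET_COUNT = 1000000),    -- hash index
        CreatedDate DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);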

Memory-optimized tables?

Memory-optimized tables store all their data in main memory: rows are read from and written to memory rather than disk. Eliminating the disk-based component of the table (along with the locking and latching it requires) results in significantly improved performance.

When creating in-memory tables, you can choose one of the two types:

  • Durable, where transactions are written to memory and to the transaction log (DURABILITY=SCHEMA_AND_DATA).
  • Non-durable, where transactions are written to memory only (DURABILITY=SCHEMA_ONLY).

In both cases, all read/write operations are performed directly in memory while the SQL Server engine process is running.

Note: By default, if a memory-optimized table is created without specifying a durability option, a durable table is created.

But now you must be wondering: since we no longer store the data on disk, and memory is volatile, what happens if the server shuts down?

Luckily, durable tables behave the same way as traditional disk-based tables: all the data is safe in case of a server restart or crash, because it is recovered from the checkpoint files and the transaction log. The same, however, cannot be said for the data stored in non-durable tables: although they offer better performance, only the table structure is recreated after a restart and the data itself is lost. That does not mean they are of no use; this type of table works well for temporary or staging tables holding large amounts of transient data.

Note: We can use both durable and non-durable tables in the same database.
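
As an example of the non-durable flavor, such a table might serve as a fast staging area whose contents we can afford to lose on a restart; the table and column names below are again just placeholders:

    -- Only the schema survives a restart; the rows themselves do not.
    CREATE TABLE dbo.SessionStateStaging
    (
        SessionId NVARCHAR(64)    NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
        Payload   VARBINARY(8000) NOT NULL,
        LoadedAt  DATETIME2       NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);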

Hekaton was designed with the goal of making transactional operations up to 100 times faster, but in practice it currently reaches gains of up to about 30 times. To get close to that number, you will need to use “natively compiled stored procedures”.

Natively compiled stored procedures?

They are Transact-SQL stored procedures compiled to native machine code. They are compiled at create time, whereas interpreted stored procedures are compiled at first execution and recompiled whenever the execution plan is no longer in the plan cache. Natively compiled stored procedures, however, are recompiled on their first execution after the database or the server is restarted (they can also be recompiled manually with sp_recompile).

Note: If you want peak performance, use natively compiled stored procedures.
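
To give an idea of the syntax, here is a minimal sketch of a natively compiled procedure that inserts into the illustrative dbo.ShoppingCart table sketched earlier (the procedure and parameter names are placeholders):

    -- Natively compiled procedures require SCHEMABINDING and an atomic block.
    CREATE PROCEDURE dbo.usp_AddCartItem
        @CartId INT,
        @UserId INT
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH
    (
        TRANSACTION ISOLATION LEVEL = SNAPSHOT,
        LANGUAGE = N'us_english'
    )
        INSERT INTO dbo.ShoppingCart (CartId, UserId, CreatedDate)
        VALUES (@CartId, @UserId, SYSUTCDATETIME());
    END;

It is then called like any other stored procedure, for example: EXEC dbo.usp_AddCartItem @CartId = 1, @UserId = 42;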

Some In-Memory OLTP use cases:

(read In-Memory OLTP usage scenarios)

Some limitations in SQL Server 2014 In-Memory OLTP:

  • Maximum memory for memory-optimized tables: 256 GB
  • Altering memory-optimized tables: not supported
  • Altering natively compiled stored procedures: not supported
  • Large objects (LOBs): not supported
  • Indexes on nullable columns: not supported

(See more SQL Server 2014 In-Memory OLTP Limitations)

All the limitations above have been lifted with SQL Server 2016.
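
For instance, starting with SQL Server 2016 an existing memory-optimized table can be altered in place, which was not possible in 2014 (shown here on the hypothetical table from the earlier sketch):

    -- Supported from SQL Server 2016 onwards; fails on SQL Server 2014.
    ALTER TABLE dbo.ShoppingCart
        ADD TotalAmount MONEY NULL;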

How to determine which tables to put in memory?

There is a tool that helps you quickly determine which database tables could benefit from being migrated to memory-optimized tables, and it is available in Microsoft SQL Server Management Studio (SSMS):

First, right-click on your table and choose Memory Optimization Advisor.

A wizard then opens to guide you through the migration.

The next step is the migration validation.

If all items pass validation, you move on to the next step, where you might get some migration warnings. You then specify the options for memory optimization, such as the filegroup name and location, choose one of the two index types mentioned above for your primary key index, and finally start the migration process.

The wizard also generates a migration script, which you can retrieve in the summary section where you review the previous choices as well.

If you already have a database and would like to know whether it contains any memory-optimized objects, SSMS offers another tool, the Memory Usage By Memory Optimized Objects report:

This brings up a report that details the memory space used by the memory-optimized objects in your database; for instance, it lists each memory-optimized table name along with details of its memory usage.
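
If you prefer T-SQL to the report, roughly the same information can be queried directly from the sys.dm_db_xtp_table_memory_stats DMV; a minimal sketch:

    -- Memory allocated to / used by each memory-optimized table and its indexes.
    SELECT
        OBJECT_NAME(ms.object_id)            AS table_name,
        ms.memory_allocated_for_table_kb,
        ms.memory_used_by_table_kb,
        ms.memory_allocated_for_indexes_kb,
        ms.memory_used_by_indexes_kb
    FROM sys.dm_db_xtp_table_memory_stats AS ms
    ORDER BY ms.memory_allocated_for_table_kb DESC;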

You can also use SSMS to assess risks and evaluate the incompatibilities of your database tables and stored procedures with memory optimization, by right-clicking the database and choosing Tasks > Generate In-Memory OLTP Migration Checklists.

After that, a wizard pops up with a short introduction, and you then choose one of the checklist generation options (which objects to analyze and where to save the checklists).

If everything goes well, you will find your checklists in the directory you selected, inside a folder named after your database.

Note: For each object you have chosen, you will find an HTML document containing its checklist inside the corresponding subfolder.

Conclusion

In this first article, we have learned what In-Memory OLTP is and what it entails (memory-optimized tables, natively compiled stored procedures…). We have also seen how SSMS can help us determine which tables in our databases can be migrated to memory-optimized tables and thereby improve database performance significantly. This built-in SQL Server technology, first released in 2014, had some limitations that were addressed by improvements in later versions. These changes opened up many use cases that I will elaborate on in later articles.
