

Downsides to In-Memory Tables
Published: Dec 03, 2020
Had I written this post just a few versions ago when in-memory was released as part of SQL Server 2014 then this would have been a very long post indeed. However, I’m pleased to say that, just a few iterations later, this feature has been improved considerably and is definitely something I now consider when looking to re-architect or tune an environment.

Having said that, there are still a couple of flaws that I do need to point out. Well, not flaws exactly; there are design reasons behind them, but they can be an instant red flag to people considering an in-memory solution.

In my system I’ve reset my configuration to defaults:
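For reference, the reset looks something like this (a minimal sketch; 2147483647 MB is simply the default, effectively unlimited, value for max server memory):

exec sp_configure 'show advanced options', 1
reconfigure
go

-- back to the default, effectively unlimited, setting
exec sp_configure 'max server memory (MB)', 2147483647
reconfigure
go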


Therefore I’ve now got all memory available to me. Okay, I’m only on a Surface Pro at the moment rather than my test server, but it still has a respectable 8GB of RAM in it.
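One thing to note if you’re following along at home: in-memory tables (even schema_only ones) can only be created in a database that has a memory-optimized filegroup, so the test database needs something like the following first (the filegroup name and file path here are just placeholders):

alter database test
add filegroup test_mod contains memory_optimized_data
go

alter database test
add file (name = 'test_mod', filename = 'C:\SQLData\test_mod')
to filegroup test_mod
go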

Now, we’re going to throw some data into memory:

use test
go

drop table if exists testMem
go

create table testMem
(
    SalesOrderID int not null,
    SalesOrderDetailID int not null,
    CarrierTrackingNumber nvarchar(25) null,
    OrderQty smallint not null,
    ProductID int not null,
    SpecialOfferID int not null,
    UnitPrice money not null,
    UnitPriceDiscount money not null,
    LineTotal decimal(19, 10) not null,
    rowguid uniqueidentifier not null,
    ModifiedDate datetime not null,

    index ix_salesOrderMem (salesOrderDetailID)
)
with (memory_optimized = on, durability = schema_only)
go

insert into testMem
select *
from AdventureWorks2012.sales.salesOrderDetail



Oops. Well, that didn’t work, did it? Issue number one is that you cannot go cross-database when using in-memory tables. Everything you’re joining to or accessing must be within the same database. This can be an issue for companies that have purposely separated data across databases, or for ETLs in which cross-database validation lookups take place.

Well, luckily we CAN access tempDB and therefore we can get around this for this demo by tweaking the insert as follows:

drop table if exists #temp

select *
into #temp
from AdventureWorks2012.Sales.SalesOrderDetailEnlarged

insert into testMem
select *
from #temp


So… we now have data and it’s being held in-memory. My server currently looks as follows:


This is a large table that I’ve put into memory and therefore it’s now holding a lot of the server’s RAM. This is completely expected and shouldn’t get in the way because we still have around 30% of the server’s RAM available to us…
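If you want to see exactly how much of that is down to the table itself, the in-memory DMVs will tell you (a quick sanity check rather than anything scientific):

select object_name(object_id) as table_name,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb,
       memory_allocated_for_indexes_kb,
       memory_used_by_indexes_kb
from sys.dm_db_xtp_table_memory_stats
where object_id = object_id('testMem')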

Now, in a new window, we’re going to read this data in a loop (mimicking a workload with a lot of reads in which shared locks are rarely released):

while 1 = 1
begin
    select * from testMem
end


So now let’s make a few amendments to the table (okay, a lot of amendments, but let’s face it, ETLs do a lot of updating so this isn’t outside the realms of possibility):

update testMem
set UnitPrice = UnitPrice * 1.2
go 50



As you can see, we ran out of memory pretty quickly. And it’s worth noting that this isn’t just this one query running out of memory; it’s the whole SQL Server instance running out of memory, starving the buffer pool and ceasing to function.
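If you want to see where all that memory has gone at the instance level, the in-memory OLTP memory clerk tells the story (a quick check, assuming a version recent enough to expose pages_kb):

select type, name, pages_kb / 1024 as memory_mb
from sys.dm_os_memory_clerks
where type like '%xtp%'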

Basically this is down to the way in which Snapshot Isolation is implemented for in-memory tables. Think of it like this… when performing an update, SQL Server sets the “end timestamp” on the existing version of the record and creates a new copy of the record whose “begin timestamp” marks it as the current version.

Each reading transaction is given a timestamp, and a row version whose end timestamp is older than the oldest active reader can be cleaned up behind the scenes. However, in a fast-moving environment such as an ETL this may well not happen quickly enough, and suddenly you are in a situation in which everything comes to a grinding halt.
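If you want to watch the row versions piling up while the read loop and the updates are running, the memory consumer and garbage collection DMVs give a reasonable view (treat this as a sketch; exactly what’s exposed varies a little between versions):

-- memory held by the table's row versions and index structures
select memory_consumer_type_desc,
       allocated_bytes / 1024 / 1024 as allocated_mb,
       used_bytes / 1024 / 1024 as used_mb
from sys.dm_db_xtp_memory_consumers
where object_id = object_id('testMem')

-- how the in-memory garbage collector is keeping up
select *
from sys.dm_xtp_gc_stats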

This is definitely something that you need to look out for and test when determining whether in-memory is right for you. A simple rule of thumb: if you have a lot of static lookup tables, or tables which change very little, then in-memory may be a good fit; but if you have tables that change a lot and are also read heavily at the same time, then you need to be very careful indeed.

