SQL Server: Avoid data loss on update while using Memory-Optimized Tables

oyjwcjzk · posted 12 months ago in Other

My company's main software has about 90% of its business logic in the database (stored procedures). In the latest version, Memory-Optimized Tables were introduced to replace the temporary tables used to pass data between procedures. The software used to rely on SNAPSHOT isolation to avoid losing data when concurrent transactions updated the same records, but the isolation level has now been changed to READ COMMITTED SNAPSHOT (in order to use MO Tables).
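For reference, these are the two database-level options in play (a minimal sketch; the database name MyCompanyDb is a placeholder): ALLOW_SNAPSHOT_ISOLATION is what let us use SNAPSHOT before, and READ_COMMITTED_SNAPSHOT is what the new version turns on.

ALTER DATABASE MyCompanyDb SET ALLOW_SNAPSHOT_ISOLATION ON;  -- lets sessions request SNAPSHOT isolation (the old behaviour)
ALTER DATABASE MyCompanyDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;  -- READ COMMITTED now reads row versions (the new default)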

Since a LOT of the software logic looks like this:

DECLARE @IdLocationsToUpdate TABLE (Id BIGINT)

INSERT INTO @IdLocationsToUpdate (Id)
SELECT Id FROM Warehouse.LocationEntity WHERE ..... -- Complex conditions or other inner joins

UPDATE l
SET ..... -- SET clause elided
FROM Warehouse.LocationEntity l
INNER JOIN @IdLocationsToUpdate lu ON lu.Id = l.Id

-- Plus multiple other updates and logics on that list of Ids

We are now having issues with UPDATEs because of the missing SNAPSHOT isolation: the records selected into the table variable may have been modified by another transaction in the meantime, so by the time the UPDATE runs, some of those records no longer meet the original conditions and should not be updated. Previously this was prevented by the update-conflict exceptions thrown under SNAPSHOT isolation, which did not allow two concurrent transactions to update the same record.
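For context, this is the behaviour we lost (a minimal sketch; the Id value and the SomeColumn column are hypothetical). Under SNAPSHOT isolation the second writer is aborted with error 3960 instead of silently overwriting:

-- Session 1
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT Id FROM Warehouse.LocationEntity WHERE Id = 42;  -- establishes the snapshot, reads a row version

-- Session 2, meanwhile, commits a change to the same row:
UPDATE Warehouse.LocationEntity SET SomeColumn = 1 WHERE Id = 42;

-- Session 1 now tries to update the row it read:
UPDATE Warehouse.LocationEntity SET SomeColumn = 2 WHERE Id = 42;
-- Error 3960: snapshot isolation transaction aborted due to update conflict
ROLLBACK;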

What do you suggest to do?

Hope the explanation was clear enough. Thanks in advance.

A first idea would be to repeat the WHERE condition used in the first SELECT in all the subsequent UPDATEs (roughly as sketched below), but that would make the code much longer and harder to follow. So I was hoping for a solution that removes the data loss while avoiding duplicated code, or a way of re-enabling SNAPSHOT isolation.
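For illustration, that first idea would look roughly like this (the SET clause and the repeated predicate are placeholders):

UPDATE l
SET l.SomeColumn = ..... -- hypothetical column
FROM Warehouse.LocationEntity l
INNER JOIN @IdLocationsToUpdate lu ON lu.Id = l.Id
WHERE ..... -- the same complex conditions repeated for every UPDATE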

cuxqih21


The obvious first step is to wrap the SELECT and all the subsequent UPDATEs in a single explicit transaction.

But you also need the UPDLOCK hint: under READ COMMITTED SNAPSHOT the SELECT reads row versions and takes no locks at all, so nothing stops another transaction from modifying those rows before your UPDATE runs. UPDLOCK takes update locks on the selected rows and holds them until the end of the transaction.

SET XACT_ABORT ON;  -- always use this for better error handling and rollback

BEGIN TRAN;

DECLARE @IdLocationsToUpdate TABLE (Id BIGINT PRIMARY KEY);  -- ideally add a primary key

INSERT INTO @IdLocationsToUpdate (Id)
SELECT Id
FROM Warehouse.LocationEntity WITH (UPDLOCK)
WHERE ..... -- Complex conditions or other inner joins

UPDATE l
SET ..... -- SET clause elided
FROM Warehouse.LocationEntity l
INNER JOIN @IdLocationsToUpdate lu ON lu.Id = l.Id;

COMMIT;

Quite why you are using temp tables in the first place is not clear: the simplified code above (even with "complex" joins) certainly doesn't need one. Most code that uses temp tables doesn't actually need them, and they needlessly complicate things and cause performance issues.
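For example, under the same assumptions about the elided SET clause and predicate, the whole pattern can collapse into one statement. A single UPDATE is atomic, so there is no gap between reading and writing for another transaction to slip into:

UPDATE l
SET ..... -- elided, as above
FROM Warehouse.LocationEntity l
WHERE ..... -- the complex conditions, evaluated and applied in one statement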
