Transaction scope that’s too big (locks, blocks, and 2 smoking deadlocks)
Don't use transaction blocks that are too large
It's easy to wrap a large loop that does many different things inside a single transaction - sometimes that's the result of scope creep, sometimes just poor planning. But generally we want to keep transaction commit sizes small - when we have control over that, which we may or may not - so let's walk through an example.
ttsBegin;
while select forUpdate someTable
{
    // does selects, calls services, heavy logic...
    // maybe even does infolog, sleeps, file IO, etc.
    // consider that each loop iteration consumes 1 second of execution time
    someTable.update(); // don't use doUpdate here
}
ttsCommit;
There are a few ways to prevent this.
First, keep your ttsBegin/ttsCommit as tight as possible - as little time as possible between opening and closing the transaction. This could mean simply moving your code so the transaction wraps only the actual update(). That can have unintended consequences that would require testing, but sometimes it really is that easy.
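As a rough sketch of that tightening (SomeTable, SomeField, and the computeNewValue helper here are hypothetical), the heavy work runs outside the transaction and only the reselect-and-update is wrapped:

SomeTable someTable;        // hypothetical table
SomeTable someTableLocked;

while select someTable
{
    // heavy logic, service calls, infolog, file IO - all outside any transaction
    var newValue = this.computeNewValue(someTable); // hypothetical helper

    ttsBegin;
    // reselect the row for update so the lock is held only for the write
    select forUpdate someTableLocked
        where someTableLocked.RecId == someTable.RecId;
    someTableLocked.SomeField = newValue;
    someTableLocked.update();
    ttsCommit;
}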
Another option is to stage as much of the work as possible, then do the update. First identify what needs to be updated and how, then actually perform the update. This can be done with insert_recordset or update_recordset statements driven by values stored in some kind of temporary storage - there are lots of options, and it could be modeled after (or even use) a top-picking batch job pattern. As with the previous option, we're simply allowing less execution time while the transaction is open - between the ttsBegin and the ttsCommit. One implementation of this currently in use is WHSRecordDeletionCommitter and WHSRecordUpdateCommitter, so you can see how Microsoft keeps commit sizes constrained while still running large operations against multiple buffers.
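A minimal sketch of the staging idea, assuming a hypothetical staging table MyUpdateStaging (with TargetRecId and NewValue fields) and a hypothetical computeNewValue helper - identify the work first, then apply it set-based:

MyUpdateStaging staging;   // hypothetical staging table
SomeTable       someTable; // hypothetical target table

// Phase 1: identify the work - heavy logic runs with no transaction open
while select someTable
{
    staging.clear();
    staging.TargetRecId = someTable.RecId;
    staging.NewValue    = this.computeNewValue(someTable); // hypothetical
    staging.insert();
}

// Phase 2: apply everything as one set-based statement, keeping the transaction short
ttsBegin;
update_recordset someTable
    setting SomeField = staging.NewValue
    join staging
        where staging.TargetRecId == someTable.RecId;
ttsCommit;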
Lastly, as mentioned before, use a batch job. There are three main "types" available plus the "classic" option: batch bundles, individual task modeling, and top picking. A batch job allows the work to be decomposed to a much higher degree and gives a lot more design freedom, because there's no user waiting at the other end for the process to complete. We can design a UI for a user to select what to process, then the batch breaks it down into manageable units and processes them however we decide.
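A rough sketch of the top-picking claim step (MyWorkTable and its MyWorkStatus status field are hypothetical) - each batch task grabs the next ready row under a short transaction, so many tasks can drain the same work table in parallel:

MyWorkTable work; // hypothetical work table with a Status field

ttsBegin;
// pessimisticLock + firstOnly: claim the next unprocessed row
select pessimisticLock firstOnly work
    where work.Status == MyWorkStatus::Ready;
if (work.RecId)
{
    work.Status = MyWorkStatus::InProgress;
    work.update();
}
ttsCommit;

// the heavy processing of the claimed row happens after this commit,
// then a second short transaction marks it complete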
A really simple pattern for commit tracking could look like this:
int i;

while select forUpdate someTable
{
    ttsBegin;
    // update one row (or small set)
    someTable.doUpdate();
    ttsCommit;

    i++;
    if (i mod 1000 == 0)
    {
        // optional: checkpoint / log progress
    }
}
Yes, you lose “one big atomic transaction,” but you gain less blocking, fewer deadlocks, more predictable runtimes, and recoverability of the dataset being acted upon (meaning if it fails halfway through, half of your records were still updated). An example of using a commit tracker is:
using (var resetInstrumentationActivitiesContext = this.activities().resetExecuting())
{
    using (var committer = WHSRecordUpdateCommitter::construct())
    {
        WHSShipmentTable shipment;

        while select forUpdate shipment
            order by shipment.ShipmentId
            where shipment.OrderLineInventTransLinkType == WHSShipmentOrderLineInventTransLinkType::PickingRoute
        {
            shipment.OrderLineInventTransLinkType = WHSShipmentOrderLineInventTransLinkType::None;

            if (!this.resetShipment(committer, shipment))
            {
                break;
            }
        }
    }

    this.activities().parmResetCount(resetInstrumentationActivitiesContext, resetCounter);
}











