DynamoDB Transactions
Overview
AWS DynamoDB transactions make it easier for developers to make coordinated, all-or-nothing changes to multiple items within and across tables. Transactions in DynamoDB provide atomicity, consistency, isolation, and durability (ACID), helping you maintain data correctness in your applications. You can use the DynamoDB transactional read and write APIs to manage complex business workflows that add, update, or delete multiple items as a single, all-or-nothing operation.
What are the Transactional APIs?
- TransactWriteItems and TransactGetItems are the two API operations that provide transactions. The first, as the name suggests, writes multiple items in a single transaction; the second reads multiple items in a single transaction. Each transactional request can include up to 100 items.
- DynamoDB has long offered batch APIs that operate on several items at once: BatchGetItem can read up to 100 items in one call, while BatchWriteItem can write up to 25 items in one call.
- There are two major differences between the batch and transactional APIs. The first concerns capacity consumption. If you use the Transact APIs, you are billed for twice the capacity that the same operations would consume outside a transaction.
- As a result, a TransactWriteItems request that inserts two items of less than 1 KB each is charged four write capacity units: two items of 1 KB each, multiplied by two for the transaction.
- The second difference concerns failure behavior. With the Transact APIs, all reads and writes succeed or fail together. With the batch APIs, some requests may succeed while others fail, and it is up to you to handle the failures.
- There are several reasons a transactional request can fail. First, one of the actions in the request may fail because of its condition: you can attach a condition expression to any of the write actions, and if that condition is not met, the write fails and the entire transaction is canceled (see the sketch after this list).
- Second, a transaction may fail if any of its items are being modified by another transaction or request. For example, if you issue a TransactGetItems request for an item while a TransactWriteItems request on the same item is in progress, the TransactGetItems request fails. This kind of failure is a transaction conflict, and you can monitor the number of transaction conflicts on your tables with CloudWatch metrics.
- Finally, a transaction can fail for more general reasons, such as insufficient capacity on the table or an issue in the DynamoDB service itself.
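Below is a minimal sketch of a TransactWriteItems call with boto3. The "Orders" table, key schema, and attribute names are hypothetical placeholders, and detailed error handling is shown later in the "Error Handling For Writing" section.

```python
import boto3

# Hypothetical single-table design: an order item and a customer counter are
# written together, all-or-nothing.
dynamodb = boto3.client("dynamodb")

dynamodb.transact_write_items(
    TransactItems=[
        {
            "Put": {
                "TableName": "Orders",
                "Item": {
                    "PK": {"S": "ORDER#1001"},
                    "OrderStatus": {"S": "PLACED"},
                },
                # If an item with this key already exists, the condition fails
                # and the whole transaction (including the Update below) is canceled.
                "ConditionExpression": "attribute_not_exists(PK)",
            }
        },
        {
            "Update": {
                "TableName": "Orders",
                "Key": {"PK": {"S": "CUSTOMER#42"}},
                "UpdateExpression": "ADD OrderCount :one",
                "ExpressionAttributeValues": {":one": {"N": "1"}},
            }
        },
    ]
)
```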
Idempotency With Transactional Requests
- AWS DynamoDB allows you to include a ClientRequestToken argument with every TransactWriteItems request.
- Including this token ensures that the request is idempotent even if it is submitted multiple times.
- Consider the following scenario: you issue a TransactWriteItems request containing several writes, one of which increments an attribute on an item.
- If a network problem leaves you unsure whether the operation succeeded, you are in a risky position. If you assume the operation succeeded when it did not, the attribute's value will be lower than it should be.
- If you assume the operation failed when it actually succeeded and you resubmit the request, the attribute's value will be higher than it should be. A client token removes this ambiguity, as the sketch below shows.
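The following sketch shows this scenario with a client token: the same token is generated once and reused for every retry, so an increment that actually committed is not applied a second time. The "Counters" table and key are hypothetical.

```python
import uuid

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")
token = str(uuid.uuid4())  # generated once, reused across retries

def increment_view_count():
    dynamodb.transact_write_items(
        ClientRequestToken=token,
        TransactItems=[
            {
                "Update": {
                    "TableName": "Counters",
                    "Key": {"PK": {"S": "PAGE#home"}},
                    "UpdateExpression": "ADD ViewCount :one",
                    "ExpressionAttributeValues": {":one": {"N": "1"}},
                }
            }
        ],
    )

for attempt in range(3):
    try:
        increment_view_count()
        break
    except ClientError:
        # Safe to resubmit: if the earlier attempt actually committed,
        # DynamoDB recognizes the token and does not increment again.
        continue
```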
Common Use Cases For DynamoDB Transactions
- Now that we've covered the fundamentals of DynamoDB transactions, let's put them to use.
- Note that a DynamoDB transaction costs twice as much as the equivalent non-transactional operation, so be deliberate and use transactions only where they are needed.
- When is it appropriate to use transactions? Three of my favorite examples are maintaining uniqueness across multiple attributes, handling counts and preventing duplicates, and authorizing a user to perform a particular action.
- I didn't include an example that requires idempotency, as discussed in the preceding section, but that is another good use case for the transactional APIs. The sketch below illustrates the uniqueness case.
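As an illustration of the uniqueness case, the sketch below writes a user record and a separate email "marker" item in one transaction, so that both the username and the email address must be unused for the signup to succeed. The single "Users" table and its key schema are hypothetical.

```python
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def create_user(username, email):
    try:
        dynamodb.transact_write_items(
            TransactItems=[
                {
                    "Put": {
                        "TableName": "Users",
                        "Item": {"PK": {"S": f"USER#{username}"}, "Email": {"S": email}},
                        "ConditionExpression": "attribute_not_exists(PK)",
                    }
                },
                {
                    # Marker item that exists only to reserve the email address.
                    "Put": {
                        "TableName": "Users",
                        "Item": {"PK": {"S": f"EMAIL#{email}"}, "Username": {"S": username}},
                        "ConditionExpression": "attribute_not_exists(PK)",
                    }
                },
            ]
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "TransactionCanceledException":
            return False  # username or email already taken
        raise
```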
TransactWriteItems API
TransactWriteItems is a synchronous and idempotent write operation that groups up to 100 write actions into a single all-or-nothing operation. These actions can target up to 100 distinct items in one or more DynamoDB tables within a single AWS account and Region.
The aggregate size of the items in the transaction cannot exceed 4 MB. The actions are carried out atomically: either all of them succeed or none of them take effect.
Multiple actions on the same item within the same transaction are not permitted. For example, you cannot perform both a ConditionCheck and an Update on the same item in the same transaction.
A transaction can include the following types of actions (a combined example follows the list):
- Put: initiates a PutItem operation to create a new item or replace an existing item, conditionally or unconditionally.
- Update: initiates an UpdateItem operation to modify an existing item's attributes or to add a new item to the table if one does not already exist. Use this action to conditionally or unconditionally add, delete, or update attributes on an existing item.
- Delete: initiates a DeleteItem operation to remove a single item from a table, identified by its primary key.
- ConditionCheck: verifies that an item exists, or checks the condition of specific attributes of an item, without modifying it.
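The sketch below combines three of these action types in one request. The "Accounts" and "Invoices" tables, keys, and attributes are hypothetical; note that each item may appear in only one action.

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.transact_write_items(
    TransactItems=[
        {
            # ConditionCheck: verify the account is active without modifying it.
            "ConditionCheck": {
                "TableName": "Accounts",
                "Key": {"AccountId": {"S": "A-100"}},
                "ConditionExpression": "AccountStatus = :active",
                "ExpressionAttributeValues": {":active": {"S": "ACTIVE"}},
            }
        },
        {
            # Put: create the final invoice.
            "Put": {
                "TableName": "Invoices",
                "Item": {
                    "InvoiceId": {"S": "INV-2001"},
                    "AccountId": {"S": "A-100"},
                    "Amount": {"N": "250"},
                },
            }
        },
        {
            # Delete: remove the draft the invoice replaces.
            "Delete": {
                "TableName": "Invoices",
                "Key": {"InvoiceId": {"S": "DRAFT-2001"}},
            }
        },
    ]
)
```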
TransactGetItems API
TransactGetItems is a synchronous read operation that groups up to 100 Get actions. These actions can target up to 100 distinct items in one or more DynamoDB tables within a single AWS account and Region. The aggregate size of the items in the transaction cannot exceed 4 MB.
The Get actions are carried out atomically: either all of them succeed or all of them fail.
Get initiates a GetItem operation to retrieve a set of attributes for the item with the given primary key. If no matching item is found, Get returns no data.
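A minimal read sketch, assuming the same hypothetical "Orders" table as earlier:

```python
import boto3

dynamodb = boto3.client("dynamodb")

response = dynamodb.transact_get_items(
    TransactItems=[
        {"Get": {"TableName": "Orders", "Key": {"PK": {"S": "ORDER#1001"}}}},
        {"Get": {"TableName": "Orders", "Key": {"PK": {"S": "CUSTOMER#42"}}}},
    ]
)

# Responses come back in the same order as the requests; an entry contains no
# "Item" key if no matching item was found.
for entry in response["Responses"]:
    print(entry.get("Item"))
```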
Once a transaction completes, the changes made within that transaction are propagated to global secondary indexes (GSIs), streams, and backups.
Since propagation is not immediate or instantaneous, if a table is restored from backup (RestoreTableFromBackup) or exported to a point in time (ExportTableToPointInTime) mid-propagation, it might contain some but not all of the changes made during a recent transaction.
Idempotency
- You can optionally include a client token when you make a TransactWriteItems call to ensure that the request is idempotent. Making your transactions idempotent helps prevent application errors if the same operation is submitted multiple times due to a connection time-out or other connectivity issues.
- If the original TransactWriteItems call was successful, the subsequent TransactWriteItems calls with the same client token return successfully without making any changes.
- If the ReturnConsumedCapacity parameter is set, the initial TransactWriteItems call returns the number of write capacity units consumed in making the changes. Subsequent TransactWriteItems calls with the same client token return the number of read capacity units consumed in reading the item.
- A client token is valid for 10 minutes after the request that uses it finishes. After 10 minutes, any request that uses the same client token is treated as a new request. You should not reuse the same client token for the same request after 10 minutes.
- If you repeat a request with the same client token within the 10-minute idempotency window but change some other request parameter, DynamoDB returns an IdempotentParameterMismatch exception.
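The sketch below replays the same request with the same token, per the behavior described above. The "Counters" table is hypothetical; the replay makes no second write, and the reported consumed capacity differs between the two calls.

```python
import uuid

import boto3

dynamodb = boto3.client("dynamodb")
token = str(uuid.uuid4())

request = dict(
    ClientRequestToken=token,
    ReturnConsumedCapacity="TOTAL",
    TransactItems=[
        {
            "Update": {
                "TableName": "Counters",
                "Key": {"PK": {"S": "PAGE#home"}},
                "UpdateExpression": "ADD ViewCount :one",
                "ExpressionAttributeValues": {":one": {"N": "1"}},
            }
        }
    ],
)

first = dynamodb.transact_write_items(**request)   # applies the write, reports write capacity
replay = dynamodb.transact_write_items(**request)  # no second write; reports read capacity instead

print(first.get("ConsumedCapacity"))
print(replay.get("ConsumedCapacity"))

# Reusing this token within the 10-minute window while changing any other
# request parameter would be rejected with an IdempotentParameterMismatch error.
```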
Error Handling For Writing
Write transactions don't succeed under the following circumstances:
- When a condition in one of the condition expressions is not met.
- When a transaction validation error occurs because more than one action in the same TransactWriteItems operation targets the same item.
- When a TransactWriteItems request conflicts with an ongoing TransactWriteItems operation on one or more items in the TransactWriteItems request. In this case, the request fails with a TransactionCanceledException; the sketch after this list shows how to inspect the cancellation reasons.
- When there is insufficient provisioned capacity for the transaction to be completed.
- When an item size becomes too large (larger than 400 KB), or a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction.
- When there is a user error, such as an invalid data format.
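When a transaction is canceled, the error response includes per-action cancellation reasons. The sketch below shows one way to surface them with boto3; the table and items are hypothetical, and the reason codes in the comment are the ones documented for this exception.

```python
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

try:
    dynamodb.transact_write_items(
        TransactItems=[
            {
                "Put": {
                    "TableName": "Orders",
                    "Item": {"PK": {"S": "ORDER#1001"}},
                    "ConditionExpression": "attribute_not_exists(PK)",
                }
            },
            {
                "Delete": {
                    "TableName": "Orders",
                    "Key": {"PK": {"S": "ORDER#0999"}},
                }
            },
        ]
    )
except ClientError as err:
    if err.response["Error"]["Code"] == "TransactionCanceledException":
        # One entry per action, in order, with a Code such as
        # "ConditionalCheckFailed", "TransactionConflict",
        # "ProvisionedThroughputExceeded", or "None".
        for position, reason in enumerate(err.response.get("CancellationReasons", [])):
            print(position, reason.get("Code"), reason.get("Message"))
    else:
        raise
```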
Isolation Levels For DynamoDB Transactions
Transactional operations (TransactWriteItems and TransactGetItems) have the following isolation levels with respect to other operations.
SERIALIZABLE
Serializable isolation ensures that the results of multiple concurrent operations are the same as if no operation began until the previous one finished.
Serializable isolation exists between the following types of operations:
- Between any transactional operation and any standard write operation (PutItem, UpdateItem, or DeleteItem).
- Between any transactional operation and any standard read operation (GetItem).
- Between a TransactWriteItems operation and a TransactGetItems operation.
- Although there is serializable isolation between a transactional operation and each individual standard write in a BatchWriteItem operation, there is no serializable isolation between the transaction and the BatchWriteItem operation as a whole.
- Likewise, the isolation level between a transactional operation and individual GetItems within a BatchGetItem operation is serializable, but the isolation level between the transaction and the BatchGetItem operation as a whole is read-committed.
READ-COMMITTED
- Read-committed isolation ensures that a read operation always returns committed values for an item; the read never presents a view of the item reflecting the state from a transactional write that did not succeed.
- Read-committed isolation does not prevent modifications to the item immediately after the read operation.
- The isolation level is read-committed between any transactional operation and any read operation that involves multiple standard reads (BatchGetItem, Query, or Scan).
- If a transactional write updates an item in the middle of a BatchGetItem, Query, or Scan operation, the subsequent reads return the newly committed value or a previously committed value.
Handling Transactional Conflicts In DynamoDB
Concurrent item-level requests on an item involved in a transaction can result in a transaction conflict. Transaction conflicts can occur in the following scenarios:
- A PutItem, UpdateItem, or DeleteItem request for an item conflicts with an ongoing TransactWriteItems request that includes the same item.
- An item within an ongoing TransactWriteItems request is part of another concurrent TransactWriteItems request.
- An item within an ongoing TransactGetItems request is part of a concurrent TransactWriteItems, BatchWriteItem, PutItem, UpdateItem, or DeleteItem request.
- For every rejected item-level request, the TransactionConflict CloudWatch metric is incremented; the sketch below shows one way to read it.
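A small sketch of reading that metric with boto3; the "Orders" table name and the one-hour window are arbitrary choices for illustration.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/DynamoDB",
    MetricName="TransactionConflict",
    Dimensions=[{"Name": "TableName", "Value": "Orders"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,          # 5-minute buckets
    Statistics=["Sum"],  # total rejected item-level requests per bucket
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```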
Individual GetItem requests are serialized with respect to a TransactWriteItems request in one of two ways: either before or after the TransactWriteItems request. Multiple GetItem requests against keys in a concurrent TransactWriteItems request can be executed in any order, so the results are read-committed.
- For example, if GetItem requests for item A and item B are executed alongside a TransactWriteItems request that modifies both item A and item B, there are four possible outcomes:
- Both GetItem requests are executed before the TransactWriteItems request.
- Both GetItem requests are executed after the TransactWriteItems request.
- The GetItem request for item A is executed before the TransactWriteItems request, and the GetItem request for item B is executed after it.
- The GetItem request for item B is executed before the TransactWriteItems request, and the GetItem request for item A is executed after it.
- If serializable isolation is required for multiple GetItem requests, use TransactGetItems instead, as in the sketch below.
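The sketch below contrasts the two read styles, using the hypothetical "Orders" table from earlier:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Two separate GetItem calls: each is serialized independently with respect to
# a concurrent TransactWriteItems request, so one may observe the state before
# the transaction and the other the state after it.
item_a = dynamodb.get_item(TableName="Orders", Key={"PK": {"S": "ITEM#A"}})
item_b = dynamodb.get_item(TableName="Orders", Key={"PK": {"S": "ITEM#B"}})

# One TransactGetItems call: both reads are serialized together, so the result
# reflects either the state before or the state after the concurrent
# transaction, never a mix of the two.
both = dynamodb.transact_get_items(
    TransactItems=[
        {"Get": {"TableName": "Orders", "Key": {"PK": {"S": "ITEM#A"}}}},
        {"Get": {"TableName": "Orders", "Key": {"PK": {"S": "ITEM#B"}}}},
    ]
)
```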
Using Transactional APIs In DynamoDB Accelerator (DAX)
- DynamoDB Accelerator (DAX) supports both TransactWriteItems and TransactGetItems with the same isolation levels as DynamoDB. TransactWriteItems writes through DAX: DAX passes the TransactWriteItems request on to DynamoDB and returns the response.
- To populate the cache after the write, DAX calls TransactGetItems in the background for each item in the TransactWriteItems operation, which consumes additional read capacity units.
- This functionality lets you keep your application logic simple and use DAX for both transactional and non-transactional operations.
- TransactGetItems requests are passed through DAX without the items being cached locally, which is the same behavior as DAX's strongly consistent read APIs. Enabling transactions for your DynamoDB tables costs nothing extra.
- You pay only for the read and write operations that are part of the transaction. DynamoDB performs two underlying reads or writes for every item in the transaction: one to prepare the transaction and one to commit it.
- These two underlying read and write operations are visible in your AWS CloudWatch metrics. When you provision capacity for your tables, account for the extra reads and writes that the transactional APIs require.
Capacity Management For Transactions
- There is no additional cost to enable transactions for a DynamoDB table. You pay only for the read and write operations that are part of your transactions.
- Every item in a transaction undergoes two underlying reads or writes in DynamoDB: one to prepare the transaction and one to commit it. These two underlying read/write operations are visible in your AWS CloudWatch metrics.
- When you provision capacity for your tables, plan for the additional read and write operations that the transactional APIs require. Suppose, for example, that your application runs one transaction per second and each transaction writes three 500-byte items to the table.
- Each item requires two write capacity units (WCUs): one to prepare the transaction and one to commit it. You would therefore need to provision six WCUs on the table; the arithmetic is sketched below.
- If you used DynamoDB Accelerator (DAX) in the same example, you would also consume two read capacity units (RCUs) for each item in the TransactWriteItems call, so you would need to provision six additional RCUs on the table.
- Similarly, if your application runs one read transaction per second and each transaction reads three 500-byte items from the table, you would need to provision six RCUs on the table. Reading each item requires two RCUs: one to prepare the transaction and one to commit it.
- Also, the default SDK behavior on a TransactionInProgressException error is to retry the transaction. Consider how many additional read capacity units these retries consume. The same applies if you retry a transaction with a ClientRequestToken in your own code.
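The provisioning arithmetic from this section can be written out as a small back-of-the-envelope calculation. The numbers below simply mirror the example above and are not an official pricing formula.

```python
import math

item_size_bytes = 500
items_per_transaction = 3
transactions_per_second = 1

# Standard writes are billed in 1 KB units and standard strongly consistent
# reads in 4 KB units; a transaction performs each underlying operation twice
# (once to prepare, once to commit).
wcu_per_item = math.ceil(item_size_bytes / 1024) * 2
rcu_per_item = math.ceil(item_size_bytes / 4096) * 2

write_wcus = wcu_per_item * items_per_transaction * transactions_per_second
read_rcus = rcu_per_item * items_per_transaction * transactions_per_second

print(write_wcus)  # 6 WCUs for a write transaction of three 500-byte items per second
print(read_rcus)   # 6 RCUs for a read transaction of three 500-byte items per second
```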
Best Practices For Transactions
- Keep the following recommendations in mind when you use DynamoDB transactions. Enable auto scaling on your tables, or make sure you have provisioned enough throughput to perform the two underlying read or write operations for every item in your transaction.
- If you are not using an AWS-provided SDK, include a ClientRequestToken attribute in your TransactWriteItems calls so that the requests are idempotent; a retry sketch follows this list.
- Don't group operations into a single transaction when it isn't necessary. For example, if a single transaction with 10 actions can be broken into several smaller transactions without compromising the correctness of the application, we recommend splitting it. Simpler transactions have a higher success rate and improve throughput.
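A retry wrapper along the lines described above might look like the following sketch. The set of retryable error codes and the backoff schedule are illustrative assumptions, not SDK defaults, and the single client token keeps the retries idempotent.

```python
import time
import uuid

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

# Illustrative choice of errors worth retrying; tune for your workload.
RETRYABLE = {"TransactionInProgressException", "ProvisionedThroughputExceededException"}

def write_with_retries(transact_items, max_attempts=4):
    token = str(uuid.uuid4())  # one token for all attempts
    for attempt in range(max_attempts):
        try:
            return dynamodb.transact_write_items(
                TransactItems=transact_items,
                ClientRequestToken=token,
            )
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code not in RETRYABLE or attempt == max_attempts - 1:
                raise
            time.sleep(0.1 * 2 ** attempt)  # simple exponential backoff
```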
Using Transactional APIs With Global Tables
- Transactional operations provide atomicity, consistency, isolation, and durability (ACID) guarantees only within the Region where the write is originally made.
- Transactions are not supported across Regions in global tables. For example, if you perform a TransactWriteItems operation in the US East (N. Virginia) Region on a global table with replicas in the US East (Ohio) and US West (Oregon) Regions, you may observe partially completed transactions in the US West (Oregon) Region while the changes are being replicated.
- Changes are replicated to other Regions only after they have been committed in the source Region.
- Conflicts can arise when several transactions attempt to modify the same items at the same time, which can force transactions to be canceled. To minimize these conflicts, follow DynamoDB's recommended practices for data modeling.
- If a group of attributes is frequently updated across multiple items as part of the same transaction, consider combining those attributes into a single item to narrow the scope of the transaction.
- Avoid ingesting data in bulk through transactions. For bulk writes, BatchWriteItem is the better choice, as in the sketch below.
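For the bulk-write case, a BatchWriteItem sketch might look like this; the "Events" table and item shape are hypothetical, and unprocessed items are resubmitted until none remain.

```python
import boto3

dynamodb = boto3.client("dynamodb")

items = [{"PK": {"S": f"EVENT#{i}"}, "Payload": {"S": "example"}} for i in range(100)]

# BatchWriteItem accepts up to 25 put/delete requests per call.
for start in range(0, len(items), 25):
    chunk = items[start:start + 25]
    request = {"Events": [{"PutRequest": {"Item": item}} for item in chunk]}
    while request:
        response = dynamodb.batch_write_item(RequestItems=request)
        # Anything DynamoDB could not process is returned for resubmission;
        # in production, add a backoff between attempts.
        request = response.get("UnprocessedItems") or None
```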
DynamoDB Transactions vs The AWS Labs Transactions Client Library
DynamoDB transactions are a more cost-effective, robust, and performant replacement for the AWSLabs transactions client library. We recommend updating applications to use the native, server-side transaction APIs.
There is no additional cost to enable transactions for your DynamoDB tables. You pay only for the reads or writes that are part of your transaction.
DynamoDB performs two underlying reads or writes of every item in the transaction: one to prepare the transaction and one to commit the transaction. These two underlying read/write operations are visible in your Amazon CloudWatch metrics.
Conclusion
- TransactWriteItems and TransactGetItems are the two API calls that handle transactions. The first, as the name suggests, writes multiple items in a single transaction; the second reads multiple items in a single transaction.
- DynamoDB lets you include a ClientRequestToken argument in TransactWriteItems requests. Including this token ensures that the request is idempotent even if it is sent more than once.
- Ensuring uniqueness across multiple attributes, managing counts and preventing duplicates, and authorizing a user to perform a particular action are typical use cases for DynamoDB transactions.
- With the transactional write API, you can group multiple Put, Update, Delete, and ConditionCheck actions and submit them as a single TransactWriteItems operation that succeeds or fails as a unit. The same is true for multiple Get actions, which you can group and submit as a single TransactGetItems operation.