Amazon DynamoDB is a fully managed NoSQL database service that delivers single-digit-millisecond latency. It is flexible and performs reliably; however, it does not include a built-in option for exporting table data to S3.
Amazon does allow DynamoDB items to be exported to S3 using AWS Data Pipeline, although this requires additional AWS services, and even with them, exporting multiple tables across multiple regions is cumbersome.
There are several reasons for exporting DynamoDB tables to S3. Doing so allows table data to be accessed by AWS services such as Amazon Athena, which would otherwise not be possible. Exporting and storing DynamoDB tables on S3 can also ensure data can be recovered in the event of a disaster.
Due to the difficulty of exporting multiple DynamoDB tables, Skeddly has developed a convenient, easy-to-use solution that allows its users to export multiple DynamoDB tables to an S3 bucket.
Skeddly announced the new export feature today, saying users can export all tables or a selection of tables based on resource tags or table-name comparisons. Tables can also be selected across multiple regions with ease.
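To illustrate the kind of name-comparison selection described above, here is a minimal sketch in Python. This is purely hypothetical and not Skeddly's actual implementation; the function name and sample table names are invented for the example.

```python
# Illustrative sketch only (not Skeddly's implementation): selecting
# DynamoDB table names for export via a simple prefix comparison.

def select_tables(table_names, prefix):
    """Return the subset of table names that start with the given prefix."""
    return [name for name in table_names if name.startswith(prefix)]

# Hypothetical table names spanning two environments.
tables = ["prod-users", "prod-orders", "staging-users"]
print(select_tables(tables, "prod-"))  # → ['prod-users', 'prod-orders']
```

A real implementation would list table names per region (for example, via the DynamoDB `ListTables` API) before applying such a filter.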
Users are also not restricted to exporting tables within the same AWS account. It doesn’t matter if the S3 bucket belongs to a different AWS account than the one containing the DynamoDB tables; the export will still work.
Currently, the export action produces UTF-8-encoded JSON data in a format similar to that used by the DynamoDB APIs. Users who would like to use this function but are prevented from doing so by the format of the exported data are encouraged to contact Skeddly and indicate which data format they would prefer. If there is sufficient demand, Skeddly will consider adding new data formats to its export function in the future.
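For readers unfamiliar with the DynamoDB API's JSON representation, the sketch below shows what a single exported item might look like in that style. The attribute names and values are hypothetical; the point is the type-descriptor wrapping ("S" for string, "N" for number) that the DynamoDB APIs use.

```python
import json

# Hypothetical item in DynamoDB-style JSON: each attribute value is
# wrapped in a type descriptor, as in the DynamoDB low-level API.
# Note that numbers are represented as strings inside the "N" wrapper.
item = {
    "UserId": {"S": "user-123"},
    "Score": {"N": "42"},
}

# Serialize to the UTF-8 JSON text an export file might contain.
print(json.dumps(item))
```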
The export function is currently free of charge to use while it is in preview.