One common use case for the cloud today is hosting data lakes. You can store your data in a compressed format to save on storage costs, but in many cases it is far more convenient to have data that is ready to use right away, and storage costs are quite small - especially when you're not dealing with millions of files or terabytes of data. An event-driven serverless approach seems like an ideal fit for this in many cases. The example below from Darren Roback shows how you can use S3 event notifications to trigger AWS Lambda code that decompresses archive files as they are uploaded to S3. https://lnkd.in/e856MZ6z
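For a flavor of what the Lambda side of this pattern can look like, here is a minimal sketch (my own illustration, not code from the linked article): an S3 ObjectCreated notification invokes a handler that reads the uploaded ZIP and writes the extracted files back into the bucket. The `extracted/` output prefix is an assumption, and a real implementation would want to stream large archives (or spill to /tmp) rather than read them fully into memory.

```python
import io
import urllib.parse
import zipfile

import boto3

s3 = boto3.client("s3")

# Hypothetical destination prefix for extracted files; adjust to your bucket layout.
OUTPUT_PREFIX = "extracted/"

def handler(event, context):
    """Triggered by an S3 ObjectCreated event; extracts the uploaded ZIP archive."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        if not key.lower().endswith(".zip"):
            continue  # only handle ZIP archives

        # Read the archive into memory; fine for small files, not for huge ones.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        with zipfile.ZipFile(io.BytesIO(body)) as archive:
            for member in archive.namelist():
                if member.endswith("/"):
                    continue  # skip directory entries
                s3.put_object(
                    Bucket=bucket,
                    Key=f"{OUTPUT_PREFIX}{member}",
                    Body=archive.read(member),
                )
```

Wiring this up is then just an S3 event notification (or EventBridge rule) on `*.zip` uploads pointing at the function, plus IAM permissions to read and write the bucket.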
More Relevant Posts
-
💥 New from AWS - Amazon S3 Tables 💥 Now we can create Iceberg tables (Parquet) as first-class resources directly in S3!
- S3 is now the first cloud object store with built-in Apache Iceberg support.
- Up to 3x faster query performance.
- Up to 10x higher transactions per second compared to self-managed Iceberg tables in general-purpose S3 buckets.
- S3 handles automatic table maintenance and snapshot expiry.
https://lnkd.in/g_QJCrNM #aws #s3 #bigdata
New Amazon S3 Tables: Storage optimized for analytics workloads | Amazon Web Services
aws.amazon.com
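As a rough sketch of what "tables as first-class resources" means in practice, the snippet below uses boto3's s3tables client to create a table bucket, a namespace, and an Iceberg table. Treat the parameter names and response shapes as assumptions based on the launch-time API; check the current SDK documentation before relying on them.

```python
import boto3

# Assumes the boto3 "s3tables" client released alongside Amazon S3 Tables;
# parameter names and response keys below are assumptions and may differ.
s3tables = boto3.client("s3tables", region_name="us-east-1")

# 1. Create a table bucket - the new resource type that holds Iceberg tables.
bucket = s3tables.create_table_bucket(name="analytics-table-bucket")
bucket_arn = bucket["arn"]

# 2. Create a namespace (roughly a database) inside the table bucket.
s3tables.create_namespace(tableBucketARN=bucket_arn, namespace=["sales"])

# 3. Create an Iceberg table as a first-class S3 resource.
s3tables.create_table(
    tableBucketARN=bucket_arn,
    namespace="sales",
    name="daily_orders",
    format="ICEBERG",
)
```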
-
If you've ever browsed through the different AWS services and offerings, you might have come across AWS State Machines (part of AWS Step Functions) and gotten scared by the name. In reality, the state machine service lets you chain events easily and seamlessly. Beyond chaining other AWS services together, state machines have some cool integrations up their sleeves which can save you from writing your own logic. Fewer moving parts means less stuff that can break! I wrote a two-part blog about real-life scenarios you can solve using AWS state machines. Scenario 1: How to log CloudTrail events to DynamoDB by using a state machine to write directly to DynamoDB. Scenario 2: How to trigger a Lambda function multiple times in parallel (parallel data processing using Lambda and state machines). I hope these blogs help you dip your toes into state machines, or help you solve a problem for your business! https://lnkd.in/ed5epSnu https://lnkd.in/eaUYy8UT #aws #serverless #cloud #tutorial #awscommunity
Log CloudTrail events to DynamoDB using AWS State Machine
dev.to
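To make the "direct service integration" idea concrete, here is a minimal sketch in the spirit of Scenario 1 (my own illustration, not the blog's code): a one-state machine that writes an item straight to DynamoDB via the optimized dynamodb:putItem integration, with no Lambda in between. The table name, role ARN, and event fields are hypothetical placeholders.

```python
import json

import boto3

# Minimal ASL definition using the optimized DynamoDB integration:
# the state machine writes the incoming event directly to a table.
definition = {
    "StartAt": "WriteEvent",
    "States": {
        "WriteEvent": {
            "Type": "Task",
            "Resource": "arn:aws:states:::dynamodb:putItem",
            "Parameters": {
                "TableName": "cloudtrail-events",  # hypothetical table
                "Item": {
                    "eventId": {"S.$": "$.detail.eventID"},
                    "eventName": {"S.$": "$.detail.eventName"},
                },
            },
            "End": True,
        }
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="log-cloudtrail-to-dynamodb",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StatesDynamoDBWriteRole",  # placeholder
)
```

For the parallel-processing scenario, the same definition style applies: a Map state with an inline Lambda invoke task and a MaxConcurrency setting fans the work out without any custom orchestration code.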
-
AWS has announced a significant price 💰 reduction for 𝗗𝘆𝗻𝗮𝗺𝗼𝗗𝗕: • 50% reduction in on-demand throughput pricing • 67% reduction for global tables These updates make on-demand capacity mode an even more attractive and cost-effective option for many workloads. This is huge news for everyone using DynamoDB. https://lnkd.in/dfXct--X
New – Amazon DynamoDB lowers pricing for on-demand throughput and global tables | Amazon Web Services
aws.amazon.com
-
Diving into the AWS Cloud with a NoSQL database often leads you to DynamoDB. Even when you follow AWS's best practices for partition key design, you might still face hot partitions, especially in read-heavy scenarios such as using DynamoDB as a shared cache layer. For more insights, subscribe to our newsletter: www.devopsbulletin.com
DynamoDB and It’s Partition Strategy For Read Heavy Use-case
devarpi.medium.com
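One mitigation often discussed for read-heavy hot partitions is key sharding: duplicating a hot item under several suffixed keys and spreading reads across them, so no single partition absorbs all the traffic. A minimal sketch of the idea (my own illustration, not from the linked post, assuming a hypothetical `shared-cache` table with a `pk` string key):

```python
import random

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("shared-cache")  # hypothetical table name

SHARD_COUNT = 10  # number of copies of each hot item, spread across partitions

def put_cached_value(cache_key: str, value: str) -> None:
    """Write the same value under N sharded keys so reads can fan out."""
    for shard in range(SHARD_COUNT):
        table.put_item(Item={"pk": f"{cache_key}#{shard}", "value": value})

def get_cached_value(cache_key: str):
    """Read from a random shard, spreading request load across partitions."""
    shard = random.randrange(SHARD_COUNT)
    resp = table.get_item(Key={"pk": f"{cache_key}#{shard}"})
    return resp.get("Item", {}).get("value")
```

The trade-off is write amplification (every cached value is written SHARD_COUNT times), which is why this pattern suits read-heavy, rarely-updated items.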
-
In my latest blog post, I provide a step-by-step guide on how to use AWS DataSync to transfer data from an EFS file system to an S3 bucket. https://lnkd.in/ehifuP8A
How to migrate data from Amazon EFS to Amazon S3 with AWS DataSync
tecracer.com
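For readers who prefer the API to the console, the same flow can be sketched with boto3's DataSync client: create an EFS source location, an S3 destination location, a task linking them, and then start a task execution. This is a rough outline rather than the article's walkthrough; all ARNs are placeholders, and the IAM role, subnet, and security group must already be set up for DataSync.

```python
import boto3

datasync = boto3.client("datasync")

# Source: the EFS file system (placeholder ARNs throughout).
src = datasync.create_location_efs(
    EfsFilesystemArn="arn:aws:elasticfilesystem:eu-central-1:123456789012:file-system/fs-0123456789abcdef0",
    Ec2Config={
        "SubnetArn": "arn:aws:ec2:eu-central-1:123456789012:subnet/subnet-0123456789abcdef0",
        "SecurityGroupArns": [
            "arn:aws:ec2:eu-central-1:123456789012:security-group/sg-0123456789abcdef0"
        ],
    },
)

# Destination: the S3 bucket, accessed through a bucket-access role.
dst = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::my-migration-target-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/DataSyncS3AccessRole"},
)

# Tie source and destination together in a task, then run it.
task = datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="efs-to-s3-migration",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```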
-
🎉 After an extensive journey through AWS DynamoDB, I've learned a great deal along the way. I've compiled these insights into a comprehensive blog post that covers DynamoDB operations, from the basics to advanced functionality. 📚✨ While this may not be the ultimate guide, I believe there's value in every story. 🚀 Let's navigate these learning curves together! 📚 https://lnkd.in/gm5xQ8if #aws #dynamodb #awscloud #awscommunity #awslearning #awsugmdu #cloud
Operations on DynamoDB
medium.com
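As a taste of the basics such a guide typically covers, here is a small sketch of the core item operations using boto3's resource API (the `users` table and its key schema are hypothetical):

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("users")  # hypothetical table with string key "user_id"

# PutItem: create or replace an item.
table.put_item(Item={"user_id": "u-001", "name": "Ada", "plan": "pro"})

# GetItem: fetch a single item by its primary key.
item = table.get_item(Key={"user_id": "u-001"}).get("Item")

# UpdateItem: modify attributes in place ("plan" is a reserved-looking word,
# so an expression attribute name is used).
table.update_item(
    Key={"user_id": "u-001"},
    UpdateExpression="SET #p = :plan",
    ExpressionAttributeNames={"#p": "plan"},
    ExpressionAttributeValues={":plan": "enterprise"},
)

# Query: fetch items by partition key (most useful with a composite key schema).
resp = table.query(KeyConditionExpression=Key("user_id").eq("u-001"))
```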
-
Launch day! Amazon S3 Tables deliver the first cloud object store with built-in Apache Iceberg support, and the easiest way to store tabular data at scale. S3 Tables are specifically optimized for analytics workloads, resulting in up to 3x faster query throughput and up to 10x higher transactions per second compared to self-managed tables. Additionally, S3 Tables are designed to perform continual table maintenance to automatically optimize query efficiency and storage cost over time, even as your data lake scales and evolves. This one is huge. We are very excited to get S3 Tables into the hands of customers. https://lnkd.in/gUpYEdGq
Announcing Amazon S3 Tables – Fully managed Apache Iceberg tables optimized for analytics workloads - AWS
aws.amazon.com