Getting to contribute to open source is immensely gratifying. SageRender was born when we faced the daunting challenge of migrating our ETL and ML workloads from a legacy platform to AWS SageMaker. We wanted a simplified abstraction that would let us migrate N pipelines with effort that doesn't grow linearly with N, and SageRender is the result. Huge congrats to all the tireless contributors to this project, and a special shout-out to Mohamed Abdul Huq Ismail. repo: https://lnkd.in/gxrhNDZn container: https://lnkd.in/ghpxk3np
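The underlying idea is to drive pipeline creation from declarative configuration instead of hand-written code, so adding one more pipeline mostly means adding one more config file. Here is a minimal sketch of that pattern in Python; it is not SageRender's actual API, and the config schema and render_pipeline helper are hypothetical:

```python
# Sketch of config-driven pipeline generation: each YAML file declares one
# pipeline, and a single renderer turns all of them into definitions.
# The schema and render_pipeline helper are illustrative, not SageRender's.
from pathlib import Path

import yaml


def render_pipeline(config: dict) -> dict:
    """Turn a declarative config into a pipeline definition (stand-in logic)."""
    return {
        "name": config["name"],
        "steps": [
            {"name": step["name"], "image": step.get("image", "default")}
            for step in config["steps"]
        ],
    }


# One renderer, N configs: effort grows with the renderer, not with N.
pipelines = [
    render_pipeline(yaml.safe_load(path.read_text()))
    for path in Path("configs").glob("*.yaml")
]
```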
-
Combining fewer components to build your application can save you time, but it isn't always the best approach. API Gateway on AWS has a number of direct integrations with other services, but using them may mean writing mapping templates in Velocity Template Language (VTL). Alternatively, you can connect most AWS components through Lambda functions and write whatever data-transformation code you need. Some components, though, let you integrate directly without a Lambda function. The example below from Khai Bui shows how to integrate API Gateway directly with DynamoDB, but you have to build mapping templates to make it work. You end up using fewer services, and you have to decide whether maintaining those VTL templates is worth it for the overall solution. https://lnkd.in/e-5ZMH9P
Deploy AWS API Gateway to read and update DynamoDB without Lambda
medium.com
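For a sense of what those mapping templates involve, here is a minimal boto3 sketch of wiring a GET method straight to DynamoDB's GetItem action. It assumes a table named Items with an id partition key, plus a pre-created REST API and IAM role; all ids, ARNs, and names are placeholders, not values from the article:

```python
import boto3

apigw = boto3.client("apigateway")

# VTL request template mapping the query-string parameter `id` onto a
# DynamoDB GetItem request (table and key names are placeholders).
request_template = """
{
  "TableName": "Items",
  "Key": {"id": {"S": "$input.params('id')"}}
}
"""

apigw.put_integration(
    restApiId="abc123",            # placeholder REST API id
    resourceId="def456",           # placeholder resource id
    httpMethod="GET",
    type="AWS",                    # direct AWS service integration, no Lambda
    integrationHttpMethod="POST",  # DynamoDB API calls are always POSTs
    uri="arn:aws:apigateway:us-east-1:dynamodb:action/GetItem",
    credentials="arn:aws:iam::123456789012:role/apigw-dynamodb-role",
    requestTemplates={"application/json": request_template},
)
```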
-
In this post I use Terraform to create a GraphQL API with AWS AppSync. For one resolver we use JavaScript instead of Apache Velocity Templates; for another we use a Lambda function as the data source. https://lnkd.in/dE_RzjgG #aws #terraform #lambda #graphql
Create an AppSync API using Terraform
valdes.com.br
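As a rough idea of the Lambda data source side: with a direct Lambda resolver, AppSync invokes the function with the GraphQL field name and arguments in the event, so one handler can dispatch on the field. A minimal Python sketch, where the getPost field and its stub response are hypothetical rather than taken from the post:

```python
# Minimal AppSync direct Lambda resolver sketch. AppSync passes the GraphQL
# field name and arguments in the event payload; the handler dispatches on them.
def handler(event, context):
    field = event["info"]["fieldName"]
    args = event.get("arguments", {})

    if field == "getPost":  # hypothetical field from the API's schema
        # Stand-in for a real data lookup (e.g., DynamoDB or an HTTP call).
        return {"id": args["id"], "title": "stub title"}

    raise ValueError(f"Unhandled field: {field}")
```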
-
Master DynamoDB Core Concepts
https://www.youtube.com/
-
Introducing Amazon MWAA micro environments for Apache Airflow | AWS Big Data Blog: the new mw1.micro environment class provides a balanced set of resources suitable for small-scale data processing and ... #bigdata #cdo #cto
Introducing Amazon MWAA micro environments for Apache Airflow
aws.amazon.com
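If you want to try the new class programmatically, the environment class is just a parameter on environment creation. A hedged boto3 sketch, where the environment name, Airflow version, ARNs, bucket, and network values are all placeholders:

```python
import boto3

mwaa = boto3.client("mwaa")

# All names, ARNs, and network values below are placeholders.
mwaa.create_environment(
    Name="micro-airflow",
    EnvironmentClass="mw1.micro",  # the new micro environment class
    AirflowVersion="2.10.1",
    SourceBucketArn="arn:aws:s3:::my-dags-bucket",
    DagS3Path="dags",
    ExecutionRoleArn="arn:aws:iam::123456789012:role/mwaa-execution-role",
    NetworkConfiguration={
        "SubnetIds": ["subnet-aaa", "subnet-bbb"],
        "SecurityGroupIds": ["sg-ccc"],
    },
)
```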
-
How to create a Node.js microservice and persist data in DynamoDB? #NodeJS #Microservices #AWS #DynamoDB #Serverless #TechBlog #LearnAndGrow #DeveloperCommunity
How to create a Node.js microservice and persisting data in DynamoDB
medium.com
-
Day 28 of #40daysofkubernetes Today, I took a deep dive into Docker volumes and explored how we can ensure data persistence. Docker volumes are crucial for maintaining data integrity and reliability. Check out my latest blog to learn about Docker volumes. Special thanks to Piyush Sachdeva and The CloudOps Community #Kubernetes #AWS #Docker #DockerVolume
Master Docker Volumes: Day 28 Guide
rahulvadakkiniyil.hashnode.dev
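The core idea in one small example: data written inside a container dies with the container unless it lands on a volume. A sketch using the Docker SDK for Python, where the volume name, image, and mount path are just illustrative choices:

```python
import docker

client = docker.from_env()

# Create a named volume and mount it at Postgres's data directory, so the
# database files outlive any individual container.
client.volumes.create(name="pg-data")
client.containers.run(
    "postgres:16",
    detach=True,
    environment={"POSTGRES_PASSWORD": "example"},
    volumes={"pg-data": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
```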
-
Ever deployed on a Friday? Or refactored code without tests? Let’s be honest, we’ve all been there! Patryk Kaczmarek is turning those classic developer mishaps into something productive with a fun service to log your fails and lessons learned. In Part 2 of his step-by-step tutorial series, discover how to level up a #serverless API on #AWS with #DynamoDB for data persistence—all powered by #Terraform. 👉 Read the blog post here: https://lnkd.in/dNXHxc7e #AWSLambda, #ServerlessComputing, #DevOpsTutorial, #DynamoDB
[102] Enhancing Your AWS Serverless API with DynamoDB for Data Persistence
move2edge.com
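The persistence step at the heart of such an API is small: a Lambda handler that takes the payload and writes an item to DynamoDB. A minimal Python sketch, where the table name, attribute names, and TABLE_NAME variable are assumptions rather than the tutorial's actual code:

```python
import json
import os
import time
import uuid

import boto3

# Table name comes from an environment variable Terraform would set (assumed).
table = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "dev-fails"))


def handler(event, context):
    body = json.loads(event.get("body") or "{}")
    item = {
        "id": str(uuid.uuid4()),
        "fail": body.get("fail", "deployed on a Friday"),
        "lesson": body.get("lesson", ""),
        "createdAt": int(time.time()),
    }
    table.put_item(Item=item)  # persist the logged fail
    return {"statusCode": 201, "body": json.dumps(item)}
```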
-
Learn by example! Using AWS #serverless components to build a data processing pipeline is a really good fit in many cases. Tools like Glue and Athena, combined with AWS Lambda, S3, API Gateway, and more, can be used to build a nice platform. Here Dr. Yaroslav Zhbankov details how to set all this up using GitHub Actions and Terraform infrastructure as code, and he even includes the source code in a GitHub repo. Explore the code and try it out for yourself. Maybe this will be the start of a similar project you have been thinking about. https://lnkd.in/eUEZ-gdn
Architecting Scalable Data Analytics: Harnessing AWS Athena, Glue, S3, Lambda, and API Gateway
medium.com
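A typical Lambda step in such a pipeline just kicks off an Athena query over data catalogued by Glue and polls for completion. A hedged boto3 sketch, where the SQL, database name, and S3 output location are placeholders:

```python
import time

import boto3

athena = boto3.client("athena")


def run_query(sql: str, database: str, output_s3: str) -> str:
    """Start an Athena query and block until it reaches a terminal state."""
    query_id = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},  # Glue catalog database
        ResultConfiguration={"OutputLocation": output_s3},
    )["QueryExecutionId"]

    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state
        time.sleep(2)


# Placeholder query, database, and results bucket.
print(run_query("SELECT COUNT(*) FROM events", "analytics_db",
                "s3://my-athena-results/"))
```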
-
The Amazon Web Services (AWS) team shows how to get started with Apache XTable on AWS and how to use it in a batch pipeline orchestrated with Amazon Managed Workflows for Apache Airflow (MWAA). This is a great hands-on blog for anyone looking at orchestrating metadata translation between lakehouse table formats (Apache Hudi, Apache Iceberg & Delta Lake) with XTable. Blog: https://lnkd.in/daGfR9mx Credits: Matthias Rudolph, Stephen Said #dataengineering #softwareengineering
Run Apache XTable on Amazon MWAA to translate open table formats | Amazon Web Services
aws.amazon.com
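For a feel of how this slots into Airflow: XTable ships a bundled utilities jar driven by a dataset config, so a DAG task can simply shell out to it on a schedule. A minimal sketch, where the jar path, config path, and schedule are assumptions rather than the blog's exact setup:

```python
# Minimal MWAA/Airflow DAG sketch that runs XTable's sync utility on a schedule.
# The jar and config paths below are placeholders; see the blog for the real setup.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="xtable_format_translation",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    translate = BashOperator(
        task_id="run_xtable_sync",
        bash_command=(
            "java -jar /opt/xtable/xtable-utilities-bundled.jar "
            "--datasetConfig /opt/xtable/dataset_config.yaml"
        ),
    )
```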
-
After obtaining my Terraform certification recently, I delved into its applications beyond AWS. Excited about our data platform project at work, I automated Snowflake resource deployments using Terraform and GitHub Actions, and wrote up the process in my latest blog post. Special thanks to Sean Kim, whose YouTube video inspired this blog. #snowflake #devops #terraform #github #aws Read more here: https://lnkd.in/g8ZSstgY
Automating Snowflake Resource Deployment using Terraform and GitHub Actions
dev.to