Using Change Data Capture (CDC) with Debezium to stream events from the MongoDB NoSQL database to the Kafka messaging broker: https://lnkd.in/eVJEwmig Head over to Lydtech Consulting at https://lnkd.in/ds2fMa_N for this and many more articles by the team on #Kafka and other interesting areas of software development. Lydtech Consulting is a software consultancy specialising in Kafka and architectural resiliency.
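The linked article walks through the full setup. As a rough companion sketch (not the article's exact configuration), a Debezium MongoDB source connector could be registered against the Kafka Connect REST API like this; the hostnames, connection string, database and collection names are illustrative placeholders:

```python
import json
import requests

# Illustrative Debezium MongoDB source connector registration.
# The Connect URL, connection string and include lists are placeholders;
# Debezium's MongoDB connector also requires MongoDB to run as a replica set.
KAFKA_CONNECT_URL = "http://localhost:8083/connectors"

connector_config = {
    "name": "mongodb-source-connector",
    "config": {
        "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
        # Connection-string style used by recent Debezium releases;
        # older versions configure mongodb.hosts instead.
        "mongodb.connection.string": "mongodb://mongo:27017/?replicaSet=rs0",
        "topic.prefix": "shop",                # topics become shop.<db>.<collection>
        "database.include.list": "shop",
        "collection.include.list": "shop.orders",
    },
}

response = requests.post(
    KAFKA_CONNECT_URL,
    headers={"Content-Type": "application/json"},
    data=json.dumps(connector_config),
    timeout=10,
)
response.raise_for_status()
print("Connector registered:", response.json()["name"])
```

Once registered, change events for each captured collection land on Kafka topics named after the topic prefix, database, and collection.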
More Relevant Posts
-
New propeldata.com engineering blog post just dropped → https://lnkd.in/giQ4tH8B Learn how you can leverage Propel's API to build analytics with real-time data coming from Postgres via Kafka. Includes step-by-step code examples and CLI commands to set everything up. 🤩 Including:
❖ How PostgreSQL CDC with Debezium works
❖ What is the Propel Kafka Data Pool?
❖ Setting up Minikube, PostgreSQL, Kafka, Kafka Connect, and Debezium
❖ How to create a Propel Kafka Data Pool
❖ How to query the data via API
❖ How to visualize the data
How to stream PostgreSQL CDC to Kafka and use Propel to get an instant API
propeldata.com
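The Propel-specific steps are covered in the linked post; as a loose sketch of just the CDC step it describes, a Debezium PostgreSQL source connector might be registered like this. The hostnames, credentials, and table names below are assumptions, not the post's exact values:

```python
import requests

# Illustrative Debezium PostgreSQL source connector for the CDC step.
# All connection details and table names are placeholders.
KAFKA_CONNECT_URL = "http://localhost:8083/connectors"

connector_config = {
    "name": "postgres-source-connector",
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "plugin.name": "pgoutput",        # logical decoding plugin built into Postgres 10+
        "database.hostname": "postgres",
        "database.port": "5432",
        "database.user": "postgres",
        "database.password": "postgres",
        "database.dbname": "app",
        "topic.prefix": "app",            # topics become app.<schema>.<table> (Debezium 2.x)
        "table.include.list": "public.orders",
    },
}

resp = requests.post(KAFKA_CONNECT_URL, json=connector_config, timeout=10)
resp.raise_for_status()
print("Registered:", resp.json()["name"])
```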
-
🔀 Real-Time Data Replication from Postgres to Apache Iceberg made simple with RisingWave! No need for complex multi-system stacks like Kafka, Flink, or Debezium. With RisingWave's native Postgres CDC and Iceberg connectors, you can set up seamless data pipelines using just a few SQL queries.
Why it matters:
✅ Enable real-time analytics, machine learning, and faster insights.
✅ Eliminate batch processing delays.
✅ Simplify your data stack for improved efficiency.
Ready to simplify your data pipelines? 🌊🧊
👉 Read the full blog by Heng Ma: https://lnkd.in/dbsBWDFu
🚨 Join our Slack for more: go.risingwave.com/slack
#DataStreaming #Postgres #ApacheIceberg #RealTimeAnalytics #Kafka
Continuous Data Replication from Postgres to Iceberg Using RisingWave
risingwave.com
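As the post notes, the pipeline is a handful of SQL statements run against RisingWave, which speaks the Postgres wire protocol, so it can be driven from psycopg2. The sketch below is only illustrative: it follows the shape of RisingWave's postgres-cdc source and iceberg sink, but the exact WITH parameters vary by RisingWave version, so treat every name here as a placeholder and rely on Heng Ma's blog and the docs for the real syntax:

```python
import psycopg2

# Connect to RisingWave over the Postgres wire protocol (default port 4566).
# Hostnames, credentials and connector parameter names below are illustrative
# and may differ between RisingWave versions.
conn = psycopg2.connect(host="localhost", port=4566, user="root", dbname="dev")
conn.autocommit = True
cur = conn.cursor()

# 1. Ingest changes from Postgres with the native postgres-cdc connector.
cur.execute("""
    CREATE SOURCE IF NOT EXISTS pg_cdc WITH (
        connector = 'postgres-cdc',
        hostname = 'postgres',
        port = '5432',
        username = 'postgres',
        password = 'postgres',
        database.name = 'app'
    );
""")

# 2. Materialize one captured table from the CDC source.
cur.execute("""
    CREATE TABLE IF NOT EXISTS orders (
        id INT PRIMARY KEY,
        amount DOUBLE PRECISION
    ) FROM pg_cdc TABLE 'public.orders';
""")

# 3. Sink the table into Apache Iceberg (catalog/storage settings omitted;
#    parameter names are version-dependent).
cur.execute("""
    CREATE SINK IF NOT EXISTS orders_iceberg FROM orders WITH (
        connector = 'iceberg',
        type = 'upsert',
        primary_key = 'id',
        warehouse.path = 's3a://warehouse/',
        database.name = 'demo_db',
        table.name = 'orders'
    );
""")
print("Postgres -> RisingWave -> Iceberg pipeline created")
```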
-
Learn two ways to move data from PostgreSQL to Kafka – a manual method, and an automated method that uses BryteFlow as a Postgres-Kafka connector to stream change data (CDC) from Postgres to Kafka. Read more: https://lnkd.in/gN5BWpKa
#postgrescdckafka #postgrestokafka #postgreskafkareplication #postgrestokafkamigration
How to Make PostgreSQL CDC to Kafka Easy (2 Methods) | BryteFlow
https://bryteflow.com
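For a sense of what a "manual" route can involve (this is a generic sketch built on Postgres logical decoding, not BryteFlow's connector and not necessarily the exact steps in the linked article), a hand-rolled Postgres-to-Kafka CDC loop might look roughly like this; the slot name, topic, and connection details are placeholders:

```python
# Rough sketch: stream Postgres logical-decoding output (wal2json) into Kafka.
# Requires wal_level=logical and the wal2json plugin installed on the server.
import psycopg2
import psycopg2.extras
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

conn = psycopg2.connect(
    "dbname=app user=postgres password=postgres host=localhost",
    connection_factory=psycopg2.extras.LogicalReplicationConnection,
)
cur = conn.cursor()

# Create the slot once; this call fails if the slot already exists.
cur.create_replication_slot("kafka_feed", output_plugin="wal2json")
cur.start_replication(slot_name="kafka_feed", decode=True)

def forward(msg):
    """Publish each decoded WAL change to Kafka and acknowledge it."""
    producer.produce("postgres.cdc.changes", value=msg.payload.encode("utf-8"))
    producer.poll(0)
    msg.cursor.send_feedback(flush_lsn=msg.data_start)

cur.consume_stream(forward)  # blocks, streaming changes as they commit
```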
-
🚧 Migrating Business Functionality Out of a Monolithic Application: A Journey with Strangler Fig Pattern 🚧
Migrating functionality from a monolith can feel like navigating a road full of potholes. The key is prioritizing safety over speed, and one of the safest approaches is the Strangler Fig pattern. But how do you execute it effectively? One method we used involved Change Data Capture (CDC) and Kafka to streamline the migration. Here's how we made it work:
✅ Starting Point: Our monolithic application supported various features and stored all data in a MySQL database.
✅ Feature Extraction: We extracted Feature A into a separate service (Service A) with a new MongoDB database.
✅ CDC Workflow: We built a CDC workflow using Debezium to move data from MySQL to MongoDB via Kafka.
✅ Routing with Nginx: We placed a proxy (Nginx) to route read requests to Service A, while all other requests, including writes for Feature A, continued to go to the monolith.
✅ Read-First Approach: After validating the read results from Service A, we gradually migrated the write operations.
✅ Final Outcome: Feature A was fully decoupled from the monolith, with all requests now going directly to Service A.
A couple of key points to consider:
👉 Why Kafka? Kafka offered several advantages, such as:
- Decoupling the monolith and the new service
- Ensuring message ordering
- Seamless integration with Debezium
👉 Why Start with Reads? This was a critical system, and we wanted to minimize risk. Starting with reads allowed the team to gain experience with the new architecture and compare responses between the monolith and the new service.
Our mantra throughout this process was safety first—ensuring a smooth transition without disrupting business operations.
🔍 What are your thoughts on the Strangler Fig pattern? Have you used it in your migrations?
Follow: Hamza Ali Khalid
#Microservices #StranglerFig #MonolithMigration #Kafka #Debezium #DataArchitecture #SoftwareEngineering #TechStrategy
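The post describes the pipeline at a high level. Purely as a hypothetical illustration of the "CDC Workflow" step (the actual migration may well have used a Kafka Connect MongoDB sink connector instead of custom code), a small consumer that replays Debezium MySQL change events into MongoDB could look like this; the topic, field, and collection names are made up:

```python
# Hypothetical sketch: apply Debezium MySQL change events from Kafka to MongoDB.
# Topic, database, collection and id field names are illustrative; a Kafka
# Connect MongoDB sink connector is a common no-code alternative.
import json
from confluent_kafka import Consumer
from pymongo import MongoClient

mongo = MongoClient("mongodb://localhost:27017")
collection = mongo["service_a"]["feature_a"]

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "feature-a-migrator",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["monolith.appdb.feature_a"])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    event = json.loads(msg.value())
    payload = event.get("payload", event)   # tolerate unwrapped envelopes
    op = payload.get("op")
    if op in ("c", "u", "r"):               # create, update, snapshot read
        after = payload["after"]
        collection.replace_one({"_id": after["id"]}, after, upsert=True)
    elif op == "d":                         # delete
        before = payload["before"]
        collection.delete_one({"_id": before["id"]})
```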
-
Are you looking to dive deep into the world of real-time data processing? Apache Kafka might just be the game-changer you need! Our latest blog post covers everything you need to know about Kafka's architecture, functionalities, and real-world use cases. Whether you’re a data engineer, a software developer, or a tech enthusiast, this comprehensive guide will help you grasp the full potential of Kafka in managing your data streams. Check out the full blog here 👉
All You Need to Know About Apache Kafka Architecture
https://www.ksolves.com
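For readers new to Kafka, the producer/consumer/topic model the guide covers boils down to something like this minimal sketch; the broker address, topic, and group id are placeholders:

```python
# Minimal Kafka produce/consume round trip; broker and topic are placeholders.
from confluent_kafka import Consumer, Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})
producer.produce("demo-topic", key="user-42", value='{"event": "signup"}')
producer.flush()                           # block until the broker acknowledges

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "demo-group",              # consumers in a group share partitions
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["demo-topic"])
msg = consumer.poll(10.0)
if msg is not None and not msg.error():
    print(msg.key(), msg.value())
consumer.close()
```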
-
I’m thrilled to share my very first Medium blog on integrating PostgreSQL with Elasticsearch using Kafka Connect. In this hands-on guide, I walk you through the setup, configuration, and validation of data flows using Docker and Kafka Connect. This is just the beginning—many more posts to come! 🎉 If you're interested in data integration, real-time analytics, or Kafka, feel free to check it out and let me know your thoughts! 👉 Don’t forget to follow me on Medium and clap for the article if you find it helpful! Check it out here: https://lnkd.in/gy4QHXMA #FirstBlog #Kafka #PostgreSQL #Elasticsearch #DataIntegration #Docker #TechJourney
Seamless Data Transfer from Postgres to ElasticSearch with Kafka Connect: A Hands-on Guide
medium.com
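The guide itself covers the full Docker setup; as a rough sketch of the sink side of such a pipeline, an Elasticsearch sink connector could be registered with Kafka Connect like this. The connector class and settings follow the widely used Confluent Elasticsearch sink, and the topic name, URLs, and flags are assumptions rather than the article's exact configuration:

```python
import requests

# Illustrative Elasticsearch sink connector registration with Kafka Connect.
# Assumes the Confluent Elasticsearch sink plugin is installed on the worker.
KAFKA_CONNECT_URL = "http://localhost:8083/connectors"

sink_config = {
    "name": "elasticsearch-sink",
    "config": {
        "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "connection.url": "http://elasticsearch:9200",
        "topics": "app.public.orders",        # the CDC topic to index
        "key.ignore": "false",                # use the Kafka record key as the document id
        "schema.ignore": "true",
        "behavior.on.null.values": "delete",  # treat tombstone records as deletes
    },
}

resp = requests.post(KAFKA_CONNECT_URL, json=sink_config, timeout=10)
resp.raise_for_status()
print("Sink registered:", resp.json()["name"])
```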
-
Ready to take your Postgres replication to the next level? Check out this blog by Phil Eaton, EDB Staff Software Engineer, where he shows you how EDB Postgres Distributed simplifies achieving production-grade replication. From handling rolling upgrades and DDL changes to ensuring data consistency across multiple nodes, EDB Postgres Distributed makes it all seamless. Whether you're managing a small database or an enterprise-scale deployment, this guide has the information you need to achieve high availability and efficiency. Read Phil's technical blog here ➡️ https://bit.ly/3WzvUFR
#PostgreSQL #EDBPostgresAI #JustSolveItWithPostgres #DatabaseManagement #HighAvailability #DataReplication
Delightful, production-grade replication for Postgres
enterprisedb.com
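The blog covers EDB Postgres Distributed specifically. As a generic, stack-agnostic aside (this is standard PostgreSQL, not PGD's own tooling), replication health on a vanilla Postgres primary can be spot-checked from the pg_stat_replication view, roughly like this:

```python
# Generic spot-check of streaming-replication lag on a vanilla Postgres primary.
# Standard PostgreSQL only, not EDB Postgres Distributed tooling; connection
# settings are placeholders.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="postgres", user="postgres")
cur = conn.cursor()
cur.execute("""
    SELECT application_name, client_addr, state, write_lag, flush_lag, replay_lag
    FROM pg_stat_replication;
""")
for name, addr, state, write_lag, flush_lag, replay_lag in cur.fetchall():
    print(f"{name} ({addr}): state={state}, "
          f"write={write_lag}, flush={flush_lag}, replay={replay_lag}")
cur.close()
conn.close()
```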