Istio 1.23 Drops the Sidecars for a Simpler ‘Ambient Mesh’
https://lnkd.in/gfTxqiYr
Louis Ryan, CTO, Solo.io

The new release of the open source Istio service mesh software offers a potentially big change in how to handle Kubernetes traffic, with the introduction of an ambient mesh option. Although the technology has been offered as an experimental feature for several releases, with the core development team taking feedback from users, this is the first release to offer it as a production-grade capability.

It’s a new architecture entirely, explained Louis Ryan, CTO of commercial Istio provider Solo.io and a member of the Istio Technical Oversight Committee and Steering Committee, in a TNS interview. “It’s cheaper, faster, easier to deploy, more scalable, better.”

An “ambient” service mesh is one that, unlike traditional approaches, does not require a separate sidecar to accompany each application. Istio is a project of the Cloud Native Computing Foundation, making it a building block of many Kubernetes deployments.

Istio Sans Sidecar

The sidecar was a necessary byproduct of microservices architecture, noted Idit Levine, founder and CEO of Solo.io. Once applications are decomposed into individual services, those services need a way to communicate, so it made sense to festoon each service with a sidecar to handle all the networking traffic. The sidecar provides security, improved reliability, and dynamic networking capabilities for each application.

Sidecars solved “a real problem,” Levine noted. The sidecar provided the functionality, but its designers “overlooked” how much overhead it would bring to the machine itself. In contrast, the ambient approach “is reducing costs because there is no sidecar everywhere. But it’s still giving you the security that you’re looking for, and all the functionality,” Levine said. “So it’s actually really amazing.” Solo.io engineers have been working on refining the ambient approach for several years now.

How Ambient Mesh Works

“This innovative approach makes networking in Kubernetes even easier. No more extra steps with sidecars. Services can now communicate more directly and simply,” wrote AWS community builder Seifeddine Rajhi in a post explaining the technology.

Ambient mesh is built on a zero trust architecture. The ztunnel (zero trust tunnel), a DaemonSet pod written in Rust, runs on each node of the cluster to handle Layer 3 and Layer 4 traffic. Envoy-based waypoint proxies then handle more complex Layer 7 traffic for each namespace, Rajhi explained.

So, as an example, with 50 pods each receiving only 50 requests per second, a single proxy can handle all of them. “That’s a massive resource savings,” Ryan pointed out.

There are other advantages as well. Upgrades are a lot easier, as applications do not need to be taken offline to assign a sidecar. Instead, updates of the DaemonSets can be done on a rolling basis. This is where the “ambien...
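As an illustration of the opt-in model the article describes (not code from the article itself): in ambient mode a workload joins the mesh by labelling its namespace rather than by injecting a sidecar into every pod. The sketch below uses the official `kubernetes` Python client; the `istio.io/dataplane-mode=ambient` label follows the Istio ambient documentation, but verify it against your Istio 1.23 installation.

```python
# Hedged sketch: opt a namespace into Istio's ambient data plane by
# labelling it. Assumes the official `kubernetes` Python client
# (pip install kubernetes) and a working kubeconfig.
from kubernetes import client, config

def enable_ambient(namespace: str) -> None:
    config.load_kube_config()
    core = client.CoreV1Api()
    body = {"metadata": {"labels": {"istio.io/dataplane-mode": "ambient"}}}
    core.patch_namespace(namespace, body)
    # From here on, ztunnel (the per-node DaemonSet) handles L3/L4 traffic
    # for pods in this namespace; a waypoint proxy is only added where
    # Layer 7 policy is needed.

if __name__ == "__main__":
    enable_ambient("default")
```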
More Relevant Posts
-
Is your startup considering serverless architecture? Our article dives deep into the key requirements and considerations when moving to a serverless setup. Learn about the infrastructure, scalability, and best practices that can help your business thrive. 🔗 Read the full article here: https://lnkd.in/es6sDnbu #Serverless #Architecture #TechForStartups #Scalability #DevOps #TechBlog #WebDevelopment #CI_CD #AtlasDevStudio
-
🚀 Unlock the Power of Microservices with Node.js and API Gateway! In today’s world of scalable and modular applications, microservices architecture is a game-changer. But as services grow, managing inter-service communication, routing, and security becomes increasingly complex. That’s where an API Gateway steps in to save the day! 🌐 I recently explored how Node.js—with its lightweight and event-driven architecture—makes for a perfect foundation to build a robust API Gateway. Whether it’s handling request routing, load balancing, or ensuring secure communication, Node.js provides the tools to streamline microservice interactions. 💡 Key takeaways from my journey: Simplify client access with a single gateway for all your microservices. Enhance security using features like authentication, rate limiting, and protocol translation. Achieve scalability and resilience with intelligent traffic management. 🔧 In the blog, I share: A step-by-step guide to building an API Gateway with Node.js. Practical code examples for setting up microservices and routing. Tips to implement advanced features like monitoring and service discovery. 🎯 If you’re working with microservices or planning to adopt them, understanding API Gateways is essential for creating scalable, secure, and maintainable systems. Read the full blog here: https://lnkd.in/e4WGQBWb Let’s discuss—how are you managing microservices in your architecture? 👇 #NodeJS #APIGateway #Microservices #WebDevelopment #Scalability #TechInnovation
API Gateway and Microservices with Node.js: A Comprehensive Guide | Ram's Tech Blog
devram.vercel.app
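The post’s guide builds the gateway in Node.js; purely to illustrate the single-entry-point routing idea it describes, here is a minimal, hedged sketch in Python using FastAPI and httpx (both assumptions, not from the post). The service names and ports are made up.

```python
# Minimal API-gateway sketch: one public entry point that forwards each
# path prefix to a different downstream microservice.
import httpx
from fastapi import FastAPI, Request, Response

app = FastAPI()

# Hypothetical service registry: path prefix -> internal service URL.
SERVICES = {
    "users": "http://user-service:8001",
    "orders": "http://order-service:8002",
}

@app.api_route("/{service}/{path:path}", methods=["GET", "POST", "PUT", "DELETE"])
async def proxy(service: str, path: str, request: Request) -> Response:
    base = SERVICES.get(service)
    if base is None:
        return Response(status_code=404, content=b"unknown service")
    caller = request.client.host if request.client else ""
    async with httpx.AsyncClient() as client:
        upstream = await client.request(
            request.method,
            f"{base}/{path}",
            content=await request.body(),
            headers={"x-forwarded-for": caller},
        )
    # Relay the downstream status and body back to the caller.
    return Response(content=upstream.content, status_code=upstream.status_code)
```

Cross-cutting concerns the post mentions (authentication, rate limiting, load balancing) would sit in this single proxy layer instead of in every service.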
-
Couldn't agree more 👍 Serverless is a large paradigm shift and needs to be adopted step by step. It doesn't require having a microservice architecture where everything is neatly split up and perfectly aligned with DDD from the get-go. Another great write-up from Yan Cui at THEBURNINGMONK Limited. https://lnkd.in/duKJ-45n #Serverless #CloudNativeCitizen #DomainDrivenDesign
I'm sorry, but the way you adopt serverless is wrong
theburningmonk.com
-
#𝗗𝗮𝘆𝟯𝟬 𝗧𝗮𝘀𝗸: 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲😀 #90daysofdevops
I have already posted some useful terms for Kubernetes; you can check the previous post. This post covers detailed Kubernetes architecture.

𝗤. 𝗪𝗵𝗮𝘁 𝗶𝘀 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀? 𝗪𝗿𝗶𝘁𝗲 𝗶𝗻 𝘆𝗼𝘂𝗿 𝗼𝘄𝗻 𝘄𝗼𝗿𝗱𝘀 𝗮𝗻𝗱 𝘄𝗵𝘆 𝗱𝗼 𝘄𝗲 𝗰𝗮𝗹𝗹 𝗶𝘁 𝗸𝟴𝘀?
Kubernetes is an open-source container management tool that automates container deployment, scaling and load balancing. It schedules, runs and manages isolated containers running on virtual, physical or cloud machines, and it supports almost all container platforms.
𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 𝗶𝘀 𝗮𝗹𝘀𝗼 𝗰𝗮𝗹𝗹𝗲𝗱 𝗞𝟴𝘀 𝗯𝗲𝗰𝗮𝘂𝘀𝗲, in the past, names were often shortened for convenience: the "K" is the first letter, the "s" is the last letter, and the number 8 stands for the eight letters in between.

𝗤. 𝗪𝗵𝗮𝘁 𝗮𝗿𝗲 𝘁𝗵𝗲 𝗯𝗲𝗻𝗲𝗳𝗶𝘁𝘀 𝗼𝗳 𝘂𝘀𝗶𝗻𝗴 𝗞𝟴𝘀?
Kubernetes provides 𝗯𝗼𝘁𝗵 𝗵𝗼𝗿𝗶𝘇𝗼𝗻𝘁𝗮𝗹 and 𝘃𝗲𝗿𝘁𝗶𝗰𝗮𝗹 𝘀𝗰𝗮𝗹𝗶𝗻𝗴, automatically scaling applications up or down based on demand. It also includes 𝗵𝗲𝗮𝗹𝘁𝗵 𝗺𝗼𝗻𝗶𝘁𝗼𝗿𝗶𝗻𝗴 for containers, ensuring that a failed container is recreated (although this may require a plugin).
𝗙𝗮𝘂𝗹𝘁 𝗧𝗼𝗹𝗲𝗿𝗮𝗻𝗰𝗲: If a pod fails, Kubernetes ensures that a new one is created to replace it.

𝗤. 𝗘𝘅𝗽𝗹𝗮𝗶𝗻 𝘁𝗵𝗲 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲 𝗼𝗳 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀
𝗖𝗼𝗻𝘁𝗿𝗼𝗹 𝗣𝗹𝗮𝗻𝗲: Manages the cluster and makes decisions about scheduling and maintaining the desired state of the cluster.
𝗪𝗼𝗿𝗸𝗲𝗿 𝗡𝗼𝗱𝗲𝘀: Where the actual applications run, managed by the control plane.

𝗤. 𝗪𝗵𝗮𝘁 𝗶𝘀 𝗖𝗼𝗻𝘁𝗿𝗼𝗹 𝗣𝗹𝗮𝗻𝗲?
The Control Plane is the brain of a Kubernetes cluster, responsible for global management and decision-making. It continuously monitors the cluster's state and makes adjustments as needed so that the desired state of the application is maintained.

𝗤. 𝗪𝗿𝗶𝘁𝗲 𝘁𝗵𝗲 𝗱𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝗰𝗲 𝗯𝗲𝘁𝘄𝗲𝗲𝗻 𝗸𝘂𝗯𝗲𝗰𝘁𝗹 𝗮𝗻𝗱 𝗸𝘂𝗯𝗲𝗹𝗲𝘁𝘀.
𝗞𝘂𝗯𝗲𝗰𝘁𝗹: A CLI tool for interacting with a k8s cluster; think of it as the CEO. Its primary function is to talk to the Kube API Server to issue commands, retrieve information and perform operations. It helps you create, update, delete and scale resources like pods, deployments and services.
𝗞𝘂𝗯𝗲𝗹𝗲𝘁𝘀: The kubelet is an agent running on each node. It keeps communication with the control plane, listens to the Kubernetes master for instructions (for example, to create pods) and sends success and failure reports back to it.

Check out the detailed blog: https://lnkd.in/g9pwqpyE
#k8s #kubelets #kubernetes #trainwithshubham Shubham Londhe
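To make the kubectl/API Server relationship above concrete: kubectl is just one client of the Kube API Server, and any API client can do the same work. The sketch below (my own illustration, assuming the official `kubernetes` Python client and an existing kubeconfig) does the equivalent of `kubectl get pods`.

```python
# Any API client can do what kubectl does: talk to the Kube API Server.
# Assumes `pip install kubernetes` and a kubeconfig on this machine.
from kubernetes import client, config

def list_pods(namespace: str = "default") -> None:
    config.load_kube_config()   # use the same credentials kubectl uses
    v1 = client.CoreV1Api()     # client for the core/v1 API group
    for pod in v1.list_namespaced_pod(namespace).items:
        # Each pod's status is reported to the API server by the kubelet
        # running on the node that hosts the pod.
        print(pod.metadata.name, pod.status.phase)

if __name__ == "__main__":
    list_pods()
```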
-
Kubernetes has rapidly become a cornerstone in cloud-native architecture, and for good reason. It ensures automatic scaling, allowing applications to dynamically adjust resources based on traffic or demand, minimizing costs and maximizing performance. The Horizontal Pod Autoscaler is a prime example, automatically scaling the number of Pods based on CPU, memory, or custom metrics. Another key feature is self-healing, where Kubernetes actively monitors the health of containers and automatically replaces or restarts any that fail, ensuring minimal downtime and increased reliability. Kubernetes also excels in secure service discovery. By leveraging built-in DNS-based service discovery, it allows microservices within the cluster to find and communicate with each other efficiently and securely without manual configurations. Moreover, with integrations like Knative, Kubernetes extends these capabilities to support serverless workloads. Knative allows applications to scale down to zero when idle, saving infrastructure costs by only consuming resources when actively handling requests. This is ideal for workloads with intermittent traffic, offering a seamless and cost-effective serverless environment on top of Kubernetes. https://lnkd.in/dpTJWRvi
Scaling Up With Kubernetes: Cloud-Native Architecture for Modern Applications - DZone
dzone.com
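As a concrete illustration of the Horizontal Pod Autoscaler mentioned above, here is a hedged sketch that creates a CPU-based HPA for a hypothetical "web" Deployment using the official `kubernetes` Python client; the names and thresholds are made up, not taken from the article.

```python
# Sketch: scale a hypothetical "web" Deployment between 2 and 10 replicas
# based on CPU utilisation. Assumes the official `kubernetes` Python
# client and a working kubeconfig.
from kubernetes import client, config

def create_cpu_hpa(namespace: str = "default") -> None:
    config.load_kube_config()
    autoscaling = client.AutoscalingV1Api()
    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="web-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="web"
            ),
            min_replicas=2,
            max_replicas=10,
            # autoscaling/v1 supports a single CPU-utilisation target;
            # memory or custom metrics need the autoscaling/v2 API.
            target_cpu_utilization_percentage=70,
        ),
    )
    autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace, hpa)

if __name__ == "__main__":
    create_cpu_hpa()
```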
-
Feature Flags and Canary Releases in Microservices https://lnkd.in/d7uXRn3r

Feature flags are commonly used constructs and have been around for a while, but in the last few years things have evolved and feature flags now play a major role in delivering continuous, risk-free releases. In general, when a new feature is not fully developed and we still want to branch off a release from the mainline, we can hide the new feature and toggle it off in production. Another use case is releasing a feature to only a small percentage of users: we set the feature 'on' for a segment or geography and 'off' for the rest of the world. The ability to toggle a feature on and off without a source code change gives developers an extra edge to experiment with conflicting features against live traffic. Let us dive deeper into feature flags and an example implementation in Spring Boot.

Things to consider when introducing a new feature flag:
Establish a consistent naming convention across applications, so the purpose of each feature flag is easily understood by other developers and product teams.

Where to maintain feature flags?
In the application property file: toggle features per environment. Useful for experimenting in development while keeping features off in production.
In a configuration server or vault: imagine you are tired after a late-night release and your ops team calls you at 4 am to tell you the new feature is creating red alerts everywhere in the monitoring tools. Here the feature toggle comes to your rescue: turn the feature 'off' in the config server and restart the compute pods alone.
In a database or cache: when configs or flag values are read from a database or an external cache system like Redis, you don't have to redeploy or restart your compute at all; the values are re-read from the source at regular intervals, so pods pick up the updated value without a restart. (A rough sketch of this option follows below.)

You can also explore open-source or third-party SDKs built for feature flags; a handful of them are already on the market, and they come with additional advantages that help with the lifecycle management of feature flags.
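The post's example implementation is in Spring Boot; the hedged sketch below only illustrates the "database or cache" option in Python, with a flag stored in Redis and re-read on an interval so that flipping it requires no redeploy. The `redis` package, key prefix and flag names are assumptions of this sketch, not from the post.

```python
# Sketch of a dynamically refreshed feature flag backed by Redis.
# Flipping "feature:new-checkout-flow" to "on"/"off" in Redis changes
# behaviour without a redeploy or restart. Assumes `pip install redis`.
import time
import redis

class FeatureFlags:
    def __init__(self, host: str = "localhost", refresh_seconds: int = 30):
        self._client = redis.Redis(host=host, decode_responses=True)
        self._refresh_seconds = refresh_seconds
        self._cache: dict[str, bool] = {}
        self._last_refresh = 0.0

    def is_enabled(self, flag_name: str, default: bool = False) -> bool:
        now = time.time()
        if now - self._last_refresh > self._refresh_seconds:
            # Re-read every flag stored under the "feature:" prefix.
            for key in self._client.scan_iter("feature:*"):
                self._cache[key.removeprefix("feature:")] = (
                    self._client.get(key) == "on"
                )
            self._last_refresh = now
        return self._cache.get(flag_name, default)

flags = FeatureFlags()
if flags.is_enabled("new-checkout-flow"):
    pass  # serve the new feature
else:
    pass  # fall back to the existing behaviour
```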
-
Microservices architecture has been a significant trend in software development over the past decade. This approach, which involves breaking down applications into smaller, loosely coupled services, has promised greater scalability, flexibility, and ease of maintenance compared to traditional monolithic architectures. However, as we step into 2024, are Microservices still relevant? Check out this blog : https://lnkd.in/efF36QgM #cloudnative #microservices #appmodernisation #kubernetes #containers
Are Microservices still relevant in 2024 - CloudMelon Vis
cloudmelonvision.com
-
Some battle-tested, no-nonsense tips for succeeding with microservices.

[1.] 𝐃𝐚𝐭𝐚𝐛𝐚𝐬𝐞 𝐩𝐞𝐫 𝐒𝐞𝐫𝐯𝐢𝐜𝐞 (𝐀𝐥𝐦𝐨𝐬𝐭 𝐀𝐥𝐰𝐚𝐲𝐬)
This isn't just about loose coupling; it's about preventing data corruption and ensuring that each service can evolve its data model independently. The exception? When you have services that need to share reference data (e.g. country codes), but even then, consider replication or a shared library.

[2.] 𝐂𝐢𝐫𝐜𝐮𝐢𝐭 𝐁𝐫𝐞𝐚𝐤𝐞𝐫𝐬 𝐀𝐫𝐞𝐧'𝐭 𝐎𝐩𝐭𝐢𝐨𝐧𝐚𝐥
Network failures happen. Services go down. Without circuit breakers, one failing service can cascade failures throughout your system. Use libraries like Hystrix or Resilience4j to prevent this and gracefully degrade functionality when things go wrong. (A minimal sketch of the pattern follows after this list.)

[3.] 𝐎𝐛𝐬𝐞𝐫𝐯𝐚𝐛𝐢𝐥𝐢𝐭𝐲 𝐢𝐬 𝐘𝐨𝐮𝐫 𝐋𝐢𝐟𝐞𝐥𝐢𝐧𝐞
Logs, metrics and distributed tracing are essential. You need to know what's happening inside your system, how requests flow and where the bottlenecks are. Tools like Prometheus, Grafana and Jaeger are your friends.

[4.] 𝐕𝐞𝐫𝐬𝐢𝐨𝐧 𝐘𝐨𝐮𝐫 𝐀𝐏𝐈𝐬, 𝐁𝐮𝐭 𝐁𝐞 𝐒𝐦𝐚𝐫𝐭
Don't break clients with every change. Use versioning to introduce new features or changes gradually. Semantic versioning is a good starting point, but don't be afraid to run multiple versions concurrently if needed.

[5.] 𝐃𝐨𝐧'𝐭 𝐑𝐞𝐢𝐧𝐯𝐞𝐧𝐭 𝐭𝐡𝐞 𝐖𝐡𝐞𝐞𝐥
There are tons of great libraries and frameworks for building microservices. Use them! Spring Boot, Micronaut, Quarkus and others provide a solid foundation and take care of a lot of the boilerplate.

[6.] 𝐒𝐭𝐚𝐭𝐞𝐥𝐞𝐬𝐬 𝐒𝐞𝐫𝐯𝐢𝐜𝐞𝐬 𝐚𝐫𝐞 𝐄𝐚𝐬𝐢𝐞𝐫 𝐭𝐨 𝐒𝐜𝐚𝐥𝐞
If your services don't hold any state, they're much easier to scale horizontally. You can add or remove instances as needed to handle varying loads.

[7.] 𝐀𝐬𝐲𝐧𝐜 𝐂𝐨𝐦𝐦𝐮𝐧𝐢𝐜𝐚𝐭𝐢𝐨𝐧 𝐢𝐬 𝐘𝐨𝐮𝐫 𝐅𝐫𝐢𝐞𝐧𝐝
Use message queues or event buses to decouple services and make them more resilient to failures. This can also improve performance by allowing services to process requests asynchronously.

[8.] 𝐒𝐭𝐚𝐫𝐭 𝐒𝐦𝐚𝐥𝐥, 𝐈𝐭𝐞𝐫𝐚𝐭𝐞 𝐅𝐚𝐬𝐭
Don't try to build a perfect microservices architecture from day one. Start with a few services, learn from your mistakes and gradually add more as you gain experience. Better still, start with a modular monolith.

And don't forget about security.

#microservices #softwaredevelopment
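The post names Hystrix and Resilience4j, which are JVM libraries; purely to illustrate the pattern from tip [2.], here is a minimal hand-rolled sketch in Python showing the closed/open/half-open state machine. In a real service you would reach for a maintained library rather than this toy class.

```python
# Minimal circuit-breaker sketch: fail fast while a dependency is broken,
# then let a trial request through after a cool-down period.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 5, reset_timeout: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # timestamp when the breaker tripped

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                # Open: fail fast instead of hammering a broken dependency.
                raise RuntimeError("circuit open; failing fast")
            # Half-open: allow one trial request through.
            self.opened_at = None
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.time()  # trip the breaker
            raise
        self.failures = 0  # a success closes the circuit again
        return result

# Usage: wrap any remote call, e.g.
# breaker = CircuitBreaker()
# data = breaker.call(fetch_inventory, "http://inventory-service/items")
```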
-
Just finished reading the book Building Microservices by Sam Newman. I expected it to be a hard sell on microservices but found an unbiased encyclopedia of software architecture wisdom and best practices. Sam presents multiple options for each topic, detailing the benefits and limitations of each. Surprisingly, the book even defends monoliths in some cases, despite its title, "Building Microservices." This balanced approach makes perfect sense.

I compiled a 10-page document of highlighted sentences from each chapter, much shorter than the book's 562 pages, offering a snapshot of its insights and best practices. https://lnkd.in/gKzb96bs

Here are my top 10 quotes from the book to give you a taste.

“Code that changes together, stays together.”

“Ubiquitous language refers to the idea that we should strive to use the same terms in our code as the users use. We must take into account the existing organizational structure when considering where and when to define boundaries, and in some situations we should perhaps even consider changing the organizational structure to support the architecture we want.”

“I think it’s important to talk first about what we want, and only then look for the right technology to implement that.”

“The first of the fallacies of distributed computing is ‘The network is reliable’. Networks aren’t reliable. They can and will fail.”

“If you are constantly making changes across multiple microservices, it’s likely that your microservice boundaries are in the wrong place. It may be worth considering merging microservices back together if you spot this happening.”

“If what you are currently doing works for you, then keep doing it! Don’t let fashion dictate your technical decisions.”

“When we detect flaky tests, it is essential that we do our best to remove them. Otherwise, we start to lose faith in a test suite that “always fails like that.” A test suite with flaky tests can become a victim of what Diane Vaughan calls the normalization of deviance—the idea that over time we can become so accustomed to things being wrong that we start to accept them as being normal and not a problem.”

“A planned outage is much easier to deal with than an unplanned one. Many organizations put processes and controls in place to try to stop failure from occurring but put little to no thought into actually making it easier to recover from failure in the first place.”

“One way to scale for resilience is to ensure that you don’t put all your eggs in one basket.”

“If it’s true that one person making a mistake can bring an entire company to its knees, you’d think that would say more about the company than the individual.”

“Be careful about caching in too many places! The more caches between you and the source of fresh data, the more stale the data can be, and the harder it can be to determine the freshness of the data that a client eventually sees.”