If you're running Apache Spark, you're wasting resources at the application level. And that means you’re wasting money 😫 Read this blog to learn how Pepperdata eliminates that waste on top of all the infrastructure-level optimizations that you may have already implemented 💰 https://lnkd.in/g7qT_EHU #capacityoptimization | #amazonemr | #amazoneks | #kubernetes | #k8s | #finops | #apachespark | AWS Partners
You Can Solve the Application Waste Problem
https://www.pepperdata.com
More Relevant Posts
-
Here's a practical guide to workload-specific garbage collection in Kubernetes, with insights and concrete steps to tune it for your clusters 🚀 (a minimal example follows below) #Kubernetes #DevOps #Cloud
Kubernetes Garbage Collection: A Practical Guide
overcast.blog
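One concrete example of workload-specific GC is Kubernetes' TTL-after-finished controller, which deletes completed Jobs (and their pods) automatically. A minimal sketch using the official kubernetes Python client; the Job name and image are illustrative, not from the guide:

```python
from kubernetes import client, config

def create_self_cleaning_job():
    # Load kubeconfig (use config.load_incluster_config() when running in-cluster).
    config.load_kube_config()

    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="nightly-report"),  # hypothetical name
        spec=client.V1JobSpec(
            # The TTL-after-finished controller garbage-collects the Job
            # 10 minutes after it completes, so finished Jobs don't pile up.
            ttl_seconds_after_finished=600,
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[
                        client.V1Container(
                            name="report",
                            image="busybox:1.36",
                            command=["sh", "-c", "echo done"],
                        )
                    ],
                )
            ),
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)

if __name__ == "__main__":
    create_self_cleaning_job()
```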
-
If you've chosen to run your application in Kubernetes on AWS using the Elastic Kubernetes Service (EKS), you will want to set up monitoring tools to keep an eye on everything. There are lots of open source tools available, and most can be set up rather easily in your cluster. You will, however, have to maintain them and manage their compute and storage yourself. If you want to use these tools (like Grafana and Prometheus), you can also take advantage of the managed versions and leave most of the work to AWS. That comes at a cost, and it's up to you to decide whether it's worth it. This article from Siva Guruvareddiar and Michael Hausenblas shows how to set these up with your EKS cluster (a minimal provisioning sketch follows below the link). https://lnkd.in/eWNSnNxF
Enhancing observability with a managed monitoring solution for Amazon EKS | Amazon Web Services
aws.amazon.com
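If you go the managed route the article describes, here is a minimal boto3 sketch that creates an Amazon Managed Service for Prometheus workspace and prints the endpoint your in-cluster collectors would remote-write to. The region and alias are placeholders; the article covers the full EKS wiring:

```python
import boto3

# Amazon Managed Service for Prometheus (service name "amp" in boto3).
amp = boto3.client("amp", region_name="us-east-1")

# Create a workspace to receive metrics from the EKS cluster.
resp = amp.create_workspace(alias="eks-observability-demo")  # alias is illustrative
workspace_id = resp["workspaceId"]

# The workspace endpoint is what Prometheus/ADOT collectors in the
# cluster use as their remote-write target.
ws = amp.describe_workspace(workspaceId=workspace_id)["workspace"]
print("Workspace:", workspace_id)
print("Remote-write base endpoint:", ws["prometheusEndpoint"])
```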
-
Node Overhead: The Hidden Cost Eating Your Kubernetes Spend https://lnkd.in/gfTxqiYr

Kubernetes node overhead is a largely unrecognized "cost of doing business" for teams using Kubernetes. It can be defined as the node resources used to run Kubernetes itself. All Kubernetes nodes have capacity: the consumption of these resources is the sticker price that providers like Amazon Web Services and Google Cloud Platform bill for. A subset of that total capacity is defined as allocatable capacity; this is the portion of the node that workloads can actually be scheduled on. Capacity is what you pay for, allocatable capacity is what you can use, and the difference is node overhead.

That overhead includes the kubelet, any control plane infrastructure running on the node, the container runtime (Docker, containerd) and, in general, any software running directly on the node that isn't in a pod. Node overhead does not include things like Prometheus, Calico/Weave/CNI pods, DNS pods, Cert Manager, kube-system or any other pods running in Kubernetes.

Calculating Node Overhead

With kubectl describe or any Kubernetes API request that tells you about the nodes in your cluster, you can immediately see the node's capacity and its allocatable capacity; the difference between them represents the node overhead. The example in the linked article (an N1-standard-2 Kubernetes node) displays the capacity block and the allocatable block: 2 CPUs of capacity (the sticker price) versus 1930m allocatable, a relatively small amount of overhead. Looking at memory, however, there's a stark difference: 7.6GB of capacity versus 5.7GB allocatable, a much more substantial amount of overhead.

Turning to Open Source for Cost Visibility

OpenCost is an open source Cloud Native Computing Foundation (CNCF) sandbox project, with contributors including Microsoft, Kubecost, Adobe, SUSE and many others. Its REST API combines information from kubectl describe with actual node cost information. To calculate node overhead within OpenCost itself, you can surface standard kube-state metrics on capacity and allocatable CPU and memory. Those metrics can be exposed to Prometheus, which collects and stores the data and provides querying and aggregation functions. OpenCost collects query results from Prometheus, calculates the node CPU/memory used for overhead (both in units of vCPUs/bytes and as a percentage) and then provides a single cost-weighted average metric as a final summary showing the fraction of costs spent on Kubernetes overhead.

Understanding Kubernetes Overhead

Using these relatively straightforward operations on widely available Prometheus metrics, you can better understand Kubernetes overhead, including how it changes as node size increases, as the node family changes, and how it differs across cloud providers. That understanding is crucial to accurately sizing clusters and informing other key cost-efficiency decisions.
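The capacity-minus-allocatable arithmetic described above is easy to script yourself. A minimal sketch with the official kubernetes Python client, using a simplified quantity parser (it assumes the common m-CPU and Ki/Mi/Gi memory suffixes):

```python
from kubernetes import client, config

# Parse Kubernetes resource quantities ("2", "1930m", "7622800Ki", "5.7Gi").
def cpu_to_millicores(q: str) -> int:
    return int(q[:-1]) if q.endswith("m") else int(float(q) * 1000)

def mem_to_bytes(q: str) -> int:
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return int(float(q[: -len(suffix)]) * factor)
    return int(q)  # plain bytes

config.load_kube_config()
for node in client.CoreV1Api().list_node().items:
    cap, alloc = node.status.capacity, node.status.allocatable
    # Overhead = what you pay for minus what you can schedule on.
    cpu_overhead = cpu_to_millicores(cap["cpu"]) - cpu_to_millicores(alloc["cpu"])
    mem_overhead = mem_to_bytes(cap["memory"]) - mem_to_bytes(alloc["memory"])
    print(f"{node.metadata.name}: overhead = {cpu_overhead}m CPU, "
          f"{mem_overhead / 1024**2:.0f} MiB memory")
```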
-
📢 𝗘𝗞𝗦 𝗔𝘂𝘁𝗼 𝗠𝗼𝗱𝗲: 𝗔 𝗚𝗮𝗺𝗲-𝗖𝗵𝗮𝗻𝗴𝗲𝗿 𝗳𝗼𝗿 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 𝗠𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁!! At the ongoing re:Invent, AWS announced Amazon EKS Auto Mode. This new capability automates many of the tasks involved in managing Kubernetes clusters, addressing the complexities of running production-grade Kubernetes applications. It promises to improve performance, reduce overhead, and simplify application deployment and management on EKS.
𝗘𝗞𝗦 𝗔𝘂𝘁𝗼 𝗠𝗼𝗱𝗲 𝗲𝗻𝗮𝗯𝗹𝗲𝘀 𝘁𝗵𝗲 𝗳𝗼𝗹𝗹𝗼𝘄𝗶𝗻𝗴 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 𝗰𝗮𝗽𝗮𝗯𝗶𝗹𝗶𝘁𝗶𝗲𝘀 𝗶𝗻 𝘆𝗼𝘂𝗿 𝗘𝗞𝗦 𝗰𝗹𝘂𝘀𝘁𝗲𝗿:
☑️ Compute auto scaling and management
☑️ Application load balancing management
☑️ Pod and service networking and network policies
☑️ Cluster DNS and GPU support
☑️ Block storage volume support
💲𝗞𝗲𝘆 𝗯𝗲𝗻𝗲𝗳𝗶𝘁𝘀 𝗼𝗳 𝘂𝘀𝗶𝗻𝗴 𝗘𝗞𝗦 𝗔𝘂𝘁𝗼 𝗠𝗼𝗱𝗲:
➡️ 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗲𝗱 𝗰𝗹𝘂𝘀𝘁𝗲𝗿 𝗺𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁: Automates tasks like node provisioning, scaling, and security patch application.
➕ 𝗜𝗺𝗽𝗿𝗼𝘃𝗲𝗱 𝗽𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝗻𝗰𝗲: Optimizes resource utilization and applies performance best practices.
➕ 𝗥𝗲𝗱𝘂𝗰𝗲𝗱 𝗼𝘃𝗲𝗿𝗵𝗲𝗮𝗱: Simplifies management and automates routine tasks.
➕ 𝗘𝗮𝘀𝗶𝗲𝗿 𝗱𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁 𝗮𝗻𝗱 𝗺𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁 𝗼𝗳 𝗮𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀: Provides a simplified experience and automates deployment tasks.
If you're looking to streamline Kubernetes cluster management and enhance application performance, EKS Auto Mode is worth considering. I'll be exploring this further and sharing my experiences; a rough sketch of enabling it is below the link. #EKS #Kubernetes #AWS #reInvent #CloudNative #DevOps #Automation #Serverless https://lnkd.in/gmxBDUJC
Streamline Kubernetes cluster management with new Amazon EKS Auto Mode | Amazon Web Services
aws.amazon.com
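For those scripting cluster creation, here is a hedged boto3 sketch of enabling Auto Mode at CreateCluster time. The ARNs and subnet IDs are placeholders, and since this API surface is new, the exact field shapes should be checked against the current EKS API reference:

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Hypothetical role ARNs and subnets, for illustration only.
resp = eks.create_cluster(
    name="auto-mode-demo",
    roleArn="arn:aws:iam::111122223333:role/eksClusterRole",
    resourcesVpcConfig={"subnetIds": ["subnet-aaaa", "subnet-bbbb"]},
    # The three Auto Mode blocks announced at launch: compute,
    # block storage, and load balancing (enabled together).
    computeConfig={
        "enabled": True,
        "nodePools": ["general-purpose", "system"],
        "nodeRoleArn": "arn:aws:iam::111122223333:role/eksAutoNodeRole",
    },
    storageConfig={"blockStorage": {"enabled": True}},
    kubernetesNetworkConfig={"elasticLoadBalancing": {"enabled": True}},
)
print(resp["cluster"]["status"])  # typically "CREATING"
```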
-
Hey everyone, I'm excited to share my latest blog post on managing AWS Lambda functions! 🚀 In this post, I dive into the importance of monitoring, logging, and debugging to keep your Lambda functions running smoothly and efficiently. Whether you're dealing with common issues or looking to optimize performance, this guide has got you covered.
🔍 Key Highlights:
- Setting up AWS CloudWatch for real-time monitoring (a small example follows below).
- Effective logging practices with CloudWatch Logs.
- Debugging tips using AWS X-Ray and local debugging with AWS SAM CLI.
- Best practices for error handling, resource optimization, and security.
Read the blog here: https://lnkd.in/grz4UJDN
I'd love to hear your thoughts and experiences with managing AWS Lambda functions. Leave a comment below and let's discuss! Also, don't forget to visit my blog for more content like this. Happy serverless computing! 💻 #AWS #Lambda #CloudComputing #Serverless #TechBlog
Managing AWS Lambda Functions: Monitoring, Logging, and Debugging
infinity-creator.blogspot.com
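To make the first highlight concrete, here is a minimal boto3 sketch (not from the blog itself) that raises a CloudWatch alarm on a Lambda function's Errors metric; the function name and SNS topic ARN are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when the function reports any errors over a 5-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="my-function-errors",           # placeholder name
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],
    Statistic="Sum",
    Period=300,                               # seconds
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",          # no invocations is not a failure
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:alerts"],  # placeholder
)
```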
-
We have solved the multi-log problem in Amazon Web Services (AWS) EKS 👬
What is the multi-log problem? 🤯 An application can have multiple log streams for different purposes, such as general stdout logs, access logs, and audit logs. Furthermore, different applications might create different log streams of their own that suit each application's observability needs. Each log stream has its own format, frequency, and permissions configuration, but we still need to collect these logs and push them to our log aggregation engine. The main challenge is that these logs live on disposable Pod storage and cannot be accessed or streamed in the same way as the default log streams from stdout/err.
In a common log architecture, you use a DaemonSet that runs the collector agent to stream the logs to the central log engine. However, this architecture can prove challenging when each application can save different log types (format, write frequency, etc.). The operations team would have to know when each new log type is added and learn from the application developers how to handle and parse it.
So how do we solve this ❓ 😟
a. Sidecar ❌: Resources used by the sidecars are duplicated, meaning if you have 100 Pods you need 100 sidecars, each consuming its own resources (RAM, CPU).
b. Collect logs in a PVC ❌: A PVC per namespace can be expensive and inefficient.
c. Sharing a scalable volume across namespaces ❌: Using objects other than the officially supported CSI objects is a challenge.
Solution: ⬇ https://lnkd.in/dEZgzftF (a rough shared-volume sketch also follows below) #AWS #EKS #kubernetes #K8s #NetApp #FSx
How to optimize log management for Amazon EKS with Amazon FSx for NetApp ONTAP | Amazon Web Services
aws.amazon.com
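To make the shared-volume idea concrete, here is a hedged sketch with the kubernetes Python client. It is not the article's exact FSx for NetApp ONTAP setup, and the storage class, claim, and pod names are made up; the point is that log files land on a ReadWriteMany volume a collector can read, instead of on disposable pod storage:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Claim against a hypothetical shared, CSI-backed storage class
# (e.g., one provisioned as in the article's FSx for NetApp ONTAP setup).
pvc = client.V1PersistentVolumeClaim(
    api_version="v1",
    kind="PersistentVolumeClaim",
    metadata=client.V1ObjectMeta(name="app-logs"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],       # many pods, one volume
        storage_class_name="shared-logs",     # made-up class name
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim("default", pvc)

# A pod that writes its audit log to the shared volume instead of pod-local disk.
pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="app",
            image="busybox:1.36",
            command=["sh", "-c",
                     "while true; do date >> /logs/audit.log; sleep 5; done"],
            volume_mounts=[client.V1VolumeMount(name="logs", mount_path="/logs")],
        )],
        volumes=[client.V1Volume(
            name="logs",
            persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                claim_name="app-logs"),
        )],
    ),
)
core.create_namespaced_pod("default", pod)
```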
-
Learn how to scale 6WIND VSR on AWS. 6WIND VSR software reaches higher throughput thanks to a unique architecture that scales linearly with the number of vCPUs allocated to the data plane. Read more:
Amazon Web Services (AWS) – Scaling on Public Cloud
https://www.6wind.com
-
Hi guys! One more post around here. In today's exercise, we will build a way to reduce costs without impacting the operation. The operation needs to run 24 hours a day, 7 days a week; however, during the night shift it is not necessary to keep the same computing resources running. So we need an automation that, at a set time, stops the resources and switches to a smaller computing resource for the night, and at dawn stops that machine again and brings the morning-shift resources back up. A rough sketch of the scheduling idea is below the link. I hope you can like, comment, and share it if possible. https://lnkd.in/di_3yzmq #aws #communitybuilder #finops #cloudoperations
Optimization and Automation of AWS Resources to reduce costs without impacting operations
dev.to
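Here is one hedged way to implement the schedule step (the dev.to article may structure it differently, e.g. around tags): a Lambda handler, triggered by an EventBridge cron with a payload like {"shift": "night"}, that stops a placeholder instance, resizes it for the incoming shift, and starts it again.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder values; a real automation would read these from tags or the event.
INSTANCE_ID = "i-0123456789abcdef0"
SHIFT_TYPES = {"night": "t3.small", "morning": "t3.xlarge"}

def lambda_handler(event, context):
    """Triggered on a schedule (e.g., EventBridge cron) with {"shift": "night"}."""
    target_type = SHIFT_TYPES[event["shift"]]

    # The instance must be stopped before its type can be changed.
    ec2.stop_instances(InstanceIds=[INSTANCE_ID])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

    # Resize for the incoming shift, then bring it back up.
    ec2.modify_instance_attribute(
        InstanceId=INSTANCE_ID,
        InstanceType={"Value": target_type},
    )
    ec2.start_instances(InstanceIds=[INSTANCE_ID])
    return {"instance": INSTANCE_ID, "type": target_type}
```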