Check out the cutest Kubernetes release blog by KodeKloud! 🐧✨ Here are the enhancements:
🖥️ Container Resource based Pod Autoscaling (Stable)
🛠️ Introduction of Structured Parameters for Dynamic Resource Allocation (Alpha)
🧠 Node Memory Swap Support (Beta)
🔐 Structured Authentication Configuration (Beta)
👤 User Namespaces in Pods (Beta)
🚀 Speed Up Recursive SELinux Label Change (Beta)
✅ Job Success/Completion Policy (Alpha)
🎮 Interactive Flag Added to kubectl Delete Command (Stable)
🔄 Routing Preferences for Services (Alpha)
Join the Kubernetes fun! 🎉 #Kubernetes #Release #1.30 #Uwubernetes https://lnkd.in/gx6fFbA5
Nimesha Jinarajadasa’s Post
-
⎈ If you and your team are using #Kubernetes or thinking about using Kubernetes for AI and ML workloads, listen up! ⎈

Kubernetes v1.31 was announced this week, and it’s a particularly notable release for several reasons, especially for those working with AI and ML workloads. Here are the highlights:

⚙️ New DRA APIs – DRA stands for dynamic resource allocation. These new APIs introduce structured parameters, making resource management more transparent and efficient. They support features like cluster autoscaling and allow for version skew between kubelet and control plane. Classic DRA is now managed via a dedicated control plane controller, enabling custom allocation policies.

📦 Support for OCI Image Volumes – Kubernetes is stepping up for AI/ML workloads by allowing OCI-compatible images to be used directly as volumes in Pods! This simplifies the integration of OCI standards, making it easier to store and distribute content via OCI registries. Enable the "ImageVolume" feature gate to start using this powerful capability.

🩺 Device Health Reporting in Pod Status – This new feature allows teams to expose device health information directly through the Pod status. By adding the "allocatedResourcesStatus" field, you can gain critical insights into the health of devices assigned to containers, improving operational visibility and troubleshooting.

Speaking from experience, working with varied hardware on Kubernetes has been a pain. These updates, all in alpha, are starting to pave the way for a much better developer and operator experience when your teams need to run varied workloads on Kubernetes that leverage CPUs, GPUs, and other types of hardware.

For more on the release, here’s the announcement post. https://lnkd.in/ei5Vc9jd

Are you and your teams leveraging varied hardware workloads on Kubernetes with a combination of CPUs and GPUs? I’d love to hear from you in the comments!

#kubernetes #cloudcomputing #ai #ml
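As a sketch of what the alpha OCI image volume support might look like in practice (the registry, image name, and mount path here are hypothetical, and the "ImageVolume" feature gate must be enabled on a v1.31 cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: model-runner
spec:
  containers:
  - name: app
    image: python:3.12
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: model-weights
      mountPath: /models   # model files from the OCI image appear here
      readOnly: true
  volumes:
  - name: model-weights
    image:                 # the new volume type: an OCI image used as a volume
      reference: registry.example.com/models/llm-weights:v1
      pullPolicy: IfNotPresent
```

This lets you ship large, versioned artifacts like model weights through the same registry pipeline as your container images, instead of baking them into the application image.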
Kubernetes v1.31: Elli
kubernetes.io
-
⚙ Tackling Node Overhead in Kubernetes? Zesty's Senior Engineer Omer Hamerman breaks down how to optimize costs and manage resources efficiently in his latest article, "Uncovering and Reducing Node Overhead Costs in Kubernetes". Check out the article if you want to learn how to:
🔥 Choose the right instance types for your workload.
🔥 Use Spot Instances for cost savings.
🔥 Configure DaemonSets to only run where necessary.
📖 Read more: https://lnkd.in/dv52MsiJ
#Kubernetes #DevOps #CloudCostOptimization
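The DaemonSet scoping tip can be sketched as a manifest that runs the agent only on labeled nodes — the names and the `hardware=gpu` label are illustrative assumptions, not taken from the article:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: gpu-monitor
spec:
  selector:
    matchLabels:
      app: gpu-monitor
  template:
    metadata:
      labels:
        app: gpu-monitor
    spec:
      # Restrict the DaemonSet to nodes that actually need the agent,
      # instead of paying its overhead on every node in the cluster.
      nodeSelector:
        hardware: gpu
      containers:
      - name: monitor
        image: gpu-monitor:latest
```

Label the relevant nodes with `kubectl label node <node_name> hardware=gpu` and the DaemonSet pods will schedule only there.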
Reduce Node Overhead Costs in Kubernetes
zesty.co
-
Kubernetes v1.30 - Key new features summarized from https://lnkd.in/gpxkx6NP

1. Container Resource based Pod Autoscaling [STABLE]
- Scales pods based on the resource utilisation of individual containers within a pod, rather than the pod as a whole.

2. Structured Parameters for Dynamic Resource Allocation [ALPHA]
- Useful when CPU and memory aren't the only resources that determine where a pod should be scheduled.
- Kubernetes can interact directly with drivers to provision external resources on a worker node, based on the pod's workload requirements, before scheduling it.
- The scheduler can discover available resources through ResourceSlices, which provide detailed information about what is available, and then launch pods using ResourceClaims, speeding up pod scheduling and improving efficiency.

3. Node Memory Swap Support [BETA, previously disabled]
- Allows pods to use a limited amount of swap space, up to the memory limit set for the pod.
- Set memorySwap.swapBehavior to LimitedSwap in the kubelet configuration.

4. Structured Authorization Configuration [BETA]
- Enhanced to support dynamic reloading and detailed metrics reporting.
- The kube-apiserver actively monitors the authorization config file for changes and applies them without a restart.

5. User Namespaces in Pods [BETA]
- Specifies exact ranges of user and group IDs for the kubelet to map into a pod's user namespace.
- Prevents ID overlap between pods and hosts.

6. Speed Up Recursive SELinux Label Change [BETA]
- SELinux contexts can be set manually via the PodSpec, or the container runtime can assign them automatically.
- Mounting volumes with the -o context= option during the first mount applies the correct security label once, instead of relabeling every file recursively.

7. Job Success/Completion Policy [ALPHA]
- A leader pod's successful completion can be enough to mark an indexed Job as successful, rather than waiting for every pod to complete.
- Adds flexibility for batch jobs and saves computational resources by terminating the remaining worker pods.

8. Interactive Flag Added to kubectl delete [STABLE]
- Lists the resources and prompts for confirmation before deletion.

9. Routing Preferences for Services [ALPHA]
- Preferences set through spec.trafficDistribution let users control how traffic is handled with respect to the client's network proximity.
- The value PreferClose lets kube-proxy prioritize endpoints in the same zone as the client.
- Previously, traffic was distributed across all available endpoints in the cluster, irrespective of geographic or network proximity.

#k8s #kubernetes
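The routing preference in item 9 could look like this in a Service manifest — a minimal sketch assuming a v1.30 cluster with the ServiceTrafficDistribution feature gate enabled (the service name and ports are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
  - port: 80
    targetPort: 8080
  # Hint kube-proxy to prefer endpoints topologically close to the client,
  # e.g. in the same zone, falling back to other endpoints if none exist.
  trafficDistribution: PreferClose
```

This is a preference, not a hard constraint: if no same-zone endpoint is healthy, traffic still reaches endpoints in other zones.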
Kubernetes v1.30 Release: What's New and Improved? | Uwubernetes | KodeKloud
https://www.youtube.com/
-
Hello Tech Lovers! Following my last post on Kubernetes, I'm sharing my latest Medium post, where I tackle another crucial aspect of Kubernetes management: optimizing resource requests and limits for your pods. Properly configuring these settings can make all the difference in maintaining a balanced, efficient cluster without wasting money: https://lnkd.in/e5_mJf4G As always, the article is based on more than five years of experience running Kubernetes clusters, and it aims to help people avoid the mistakes I've encountered. Happy reading ;) #kubernetes
Optimizing resource requests and limits for kubernetes pods
hervekhg.medium.com
-
🚀 Mastering Multi-Architecture Docker Images for Kubernetes Deployments 🚀

With the rise of ARM-based processors, optimizing your Kubernetes deployments for both ARM and x86 architectures is crucial. 🖥️📈

In my latest article, I walk you through how to create multi-architecture Docker images that work seamlessly across ARM and x86 platforms. By using Docker's and Podman's multi-architecture features, you can:
✅ Simplify Deployment: Use a single image tag for both ARM and x86 nodes.
✅ Reduce Complexity: Avoid managing separate images or tags.
✅ Optimize Resources: Leverage cost-effective ARM instances without hassle.
✅ Ensure Consistency: Maintain application behavior across different architectures.

This approach streamlines your CI/CD pipeline and future-proofs your applications. Check out the full guide for a step-by-step process on building and deploying multi-architecture images! 💡🔧

#Docker #Kubernetes #DevOps #MultiArchitecture #ARM #x86 #CloudComputing https://lnkd.in/e5cV5Ugg
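The Docker side of this workflow can be sketched with `docker buildx` — the builder name and the registry/tag below are placeholder assumptions, and pushing requires access to a registry:

```shell
# Create (once) a builder instance that can target multiple platforms
docker buildx create --name multiarch --use

# Build for both architectures and push a single manifest-list tag
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:1.0 \
  --push .

# Verify that both platform variants are present under the one tag
docker buildx imagetools inspect registry.example.com/myapp:1.0
```

Kubernetes nodes then pull the same tag, and the container runtime automatically selects the variant matching the node's architecture.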
Mastering Multi-Architecture Docker Images for Kubernetes Deployments Across ARM and x86 Platforms
medium.com
-
Hi everyone, I'm excited to share another blog of mine on admission controllers in Kubernetes, featuring Istio concepts with practical examples explained in very simple words. Please give it a read and let me know your thoughts! https://lnkd.in/ddCc54HW Stay tuned for more insights on Istio’s Virtual Services, Gateways, and Kiali in upcoming posts. #DevOps #Istio #Kubernetes #ServiceMesh #TechBlog #Microservices
Admission Controllers in Kubernetes with Istio: In Simple Words
medium.com
-
Today is #day26 as part of the #cka program. In this blog, we will explore Understanding Resource Requirements in Kubernetes, with practical examples to demonstrate their usage.

TABLE OF CONTENTS
🗼 Introduction
🗼 Resource Requests
🗼 Resource Limits
🗼 Exceeded Limits
🗼 Conclusion

Special Thanks for Inspiring me throughout my journey: Prasad Suman Mohan, Shubham Londhe, Savinder Puri

Thank you for your continued support, and let's embark on this CKA journey together! 🌐📚💡

#limits #requests #pods #LearnInPublic #authentication #security #kubernetes #architecture #docker #CKA #ApplicationDevelopment #BlogSeries #LearningJourney #CloudNative #devops #cloud #learning #experience #management #connections #developer #power #engineers #techblog #kubernetesobjects
Understanding Resource Requirements in Kubernetes
ashutoshamblogs.hashnode.dev
-
🚀 𝗗𝗮𝘆 𝟯: 𝗧𝗿𝗼𝘂𝗯𝗹𝗲𝘀𝗵𝗼𝗼𝘁𝗶𝗻𝗴 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 - 𝗡𝗼𝗱𝗲 𝗡𝗼𝘁 𝗥𝗲𝗮𝗱𝘆 🚀

🔍 𝗪𝗵𝗮𝘁 𝗜𝘁 𝗠𝗲𝗮𝗻𝘀: This error indicates that a node in your Kubernetes cluster is not ready to accept workloads. It could be due to several reasons, including network issues, resource constraints, or kubelet problems. 🚫🖥️

𝗖𝗼𝗺𝗺𝗼𝗻 𝗖𝘂𝗹𝗽𝗿𝗶𝘁𝘀:
- 𝗥𝗲𝘀𝗼𝘂𝗿𝗰𝗲 𝗘𝘅𝗵𝗮𝘂𝘀𝘁𝗶𝗼𝗻: The node might be overloaded on memory, CPU, or disk usage 🈵, hindering its ability to accept new pods.
- 𝗞𝘂𝗯𝗲𝗹𝗲𝘁 𝗪𝗼𝗲𝘀: The kubelet, a crucial agent running on each node, might malfunction or fail to communicate with the API server.
- 𝗗𝗼𝗰𝗸𝗲𝗿 𝗗𝗿𝗮𝗺𝗮𝘀: Underlying Docker issues on the node can prevent the container runtime from working properly. ❗
- 𝗡𝗲𝘁𝘄𝗼𝗿𝗸 𝗡𝗶𝗴𝗵𝘁𝗺𝗮𝗿𝗲𝘀: If the node cannot communicate with the API server or other nodes in the cluster, it might appear "Not Ready".

𝗧𝗿𝗼𝘂𝗯𝗹𝗲𝘀𝗵𝗼𝗼𝘁𝗶𝗻𝗴 𝗦𝘁𝗲𝗽𝘀:
- 𝗜𝗻𝘃𝗲𝘀𝘁𝗶𝗴𝗮𝘁𝗲 𝗥𝗲𝘀𝗼𝘂𝗿𝗰𝗲 𝗨𝘀𝗮𝗴𝗲: Use kubectl describe node <node_name> to inspect resource utilization on the problematic node. Look for bottlenecks like high CPU usage or low memory availability.
- 𝗖𝗵𝗲𝗰𝗸 𝗞𝘂𝗯𝗲𝗹𝗲𝘁 𝗟𝗼𝗴𝘀: Examine kubelet logs using journalctl -u kubelet on the affected node to pinpoint potential issues with the kubelet itself.
- 𝗩𝗲𝗿𝗶𝗳𝘆 𝗗𝗼𝗰𝗸𝗲𝗿 𝗦𝘁𝗮𝘁𝘂𝘀: Use systemctl status docker to ensure Docker is running healthily on the node ✅. Restart Docker if necessary using systemctl restart docker.
- 𝗧𝗿𝗼𝘂𝗯𝗹𝗲𝘀𝗵𝗼𝗼𝘁 𝗡𝗲𝘁𝘄𝗼𝗿𝗸 𝗖𝗼𝗻𝗻𝗲𝗰𝘁𝗶𝘃𝗶𝘁𝘆: Utilize tools like ping or nslookup to verify network connectivity between the node and the API server.

𝗕𝗼𝗻𝘂𝘀 𝗧𝗶𝗽: Leverage Kubernetes' built-in health checks to proactively detect and address node issues before they disrupt your deployments.

By following these steps, you should be able to diagnose the root cause of the Node Not Ready error and get your node back in the game. Stay tuned for more installments in this Kubernetes troubleshooting series!

#Kubernetes #DevOps #CloudComputing #Troubleshooting #TechSeries #KubernetesErrors #ContinuousLearning
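The troubleshooting steps above can be condensed into a quick triage checklist — node names and addresses are placeholders, and the Docker commands assume a Docker-based node (swap in containerd equivalents if that is your runtime):

```shell
# 1. Which node is NotReady, and why?
kubectl get nodes
kubectl describe node <node_name>        # check Conditions, Events, Allocated resources

# 2. On the affected node: is the kubelet healthy?
systemctl status kubelet
journalctl -u kubelet --since "30 min ago" | tail -50

# 3. Is the container runtime healthy?
systemctl status docker
sudo systemctl restart docker            # only if it is actually unhealthy

# 4. Can the node reach the API server?
ping <api-server-address>
nslookup <api-server-address>
```

Work top to bottom: most "Not Ready" cases resolve at step 2 or 3, and the Conditions block in step 1 (e.g. MemoryPressure, DiskPressure) usually tells you which branch to take.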
-
🚀 Kubernetes v1.31: New CPUManager Static Policy ⚙️

Kubernetes v1.31 introduces an exciting update with the CPUManager Static Policy 🧠, which brings optimized CPU distribution across cores for better performance! 🎯

🔍 What's New?
🔄 Balanced CPU allocation across available cores
🚀 Improved workload performance for high-demand applications
🕒 Enhanced efficiency for latency-sensitive tasks

Get ready to leverage smarter resource allocation and boost your Kubernetes clusters! 🌟 https://lnkd.in/gaF5FUWx

#razorops #cicd #pipeline #Kubernetes #K8s #CPUManagement #DevOps #CloudNative #KubernetesUpdate #PerformanceBoost
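A minimal sketch of how this might be enabled in the kubelet configuration, assuming the alpha `distribute-cpus-across-cores` policy option and its `CPUManagerPolicyAlphaOptions` feature gate (the CPU reservation value below is an arbitrary example):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CPUManagerPolicyAlphaOptions: true   # required: the option is alpha in v1.31
cpuManagerPolicy: static               # exclusive CPU pinning for Guaranteed pods
cpuManagerPolicyOptions:
  distribute-cpus-across-cores: "true" # spread allocated CPUs across physical cores
# The static policy also needs an explicit reservation for system daemons, e.g.:
reservedSystemCPUs: "0,1"
```

With this in place, Guaranteed pods requesting whole CPUs get them spread across physical cores rather than packed onto sibling hyperthreads, which helps latency-sensitive workloads.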
-
🚀 Optimizing Kubernetes with QoS Classes! 🚀

In Kubernetes, Quality of Service (QoS) classes are essential for managing resource allocation and ensuring the stability of your applications. They help define resource guarantees and limits, prioritizing workloads especially during high demand.

✨ **Quick Overview:**
1️⃣ **Guaranteed:** Provides stable performance, perfect for mission-critical apps.
2️⃣ **Burstable:** Balances between resource limits and minimums for flexible workloads.
3️⃣ **BestEffort:** Uses minimal resources, ideal for non-essential tasks.

**🔍 Example:** Let’s consider a critical payment app, a job scheduler, and a log cleaner.

- **Guaranteed:**

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payment-processor
spec:
  containers:
  - name: app
    image: payment-app:latest
    resources:
      requests:
        cpu: "500m"
        memory: "1Gi"
      limits:
        cpu: "500m"
        memory: "1Gi"
```

Ensures the payment app always gets 500m CPU and 1Gi memory.

- **Burstable:**

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: job-scheduler
spec:
  containers:
  - name: app
    image: scheduler-app:latest
    resources:
      requests:
        cpu: "100m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "1Gi"
```

Allows the job scheduler to use extra resources if available but ensures a minimum.

- **BestEffort:**

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: log-cleaner
spec:
  containers:
  - name: app
    image: log-cleaner-app:latest
```

Runs the log cleaner with whatever resources are left, without interfering with critical tasks.

🛠️ By leveraging QoS classes, you can optimize resource usage, enhance reliability, and maintain application performance.

💡 **Tip:** Regularly monitor and adjust QoS settings based on your workload needs and cluster capacity.

#Kubernetes #QoS #DevOps #CloudComputing #TechTips #Containers #Infrastructure
🔗 [Learn more about Kubernetes QoS](https://lnkd.in/g_YzgwKa)
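Kubernetes assigns the QoS class automatically from the requests/limits shape; if you apply the example Pods above to a running cluster, you can read the assigned class back from the Pod status (pod names follow the examples):

```shell
# .status.qosClass is set by Kubernetes and is read-only
kubectl get pod payment-processor -o jsonpath='{.status.qosClass}'   # Guaranteed: requests == limits for all containers
kubectl get pod job-scheduler -o jsonpath='{.status.qosClass}'       # Burstable: requests < limits
kubectl get pod log-cleaner -o jsonpath='{.status.qosClass}'         # BestEffort: no requests or limits set
```

This is a quick way to confirm a Pod landed in the class you intended, since a single missing limit on one container is enough to silently demote it from Guaranteed to Burstable.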
Configure Quality of Service for Pods
kubernetes.io