SlimFaas is one of the tools with the strongest Green impact that I know of! Are you using Kubernetes? It can be tested in 5 minutes and will likely cut your CPU and RAM consumption by a factor of five or six. A small gesture for the planet. #SlimFaas https://lnkd.in/etDaUV4m
Guillaume Chervet’s Post
More Relevant Posts
-
I found an interesting blog about Kubernetes (K8S) resource limits and requests. It dives into the importance of properly configuring these settings to optimize application performance, prevent resource starvation, and manage cluster stability. The blog explains how to set requests and limits for CPU and memory, offering practical tips and examples. If you're working with Kubernetes and want to ensure efficient resource allocation, this is a great resource to check out! 🧐
Understanding Kubernetes Limits and Requests
sysdig.com
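For context, a minimal sketch of what such a configuration looks like — the pod name, container, and values here are illustrative, not taken from the article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app            # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.27
      resources:
        requests:           # what the scheduler reserves on a node
          cpu: "250m"       # a quarter of a CPU core
          memory: "128Mi"
        limits:             # hard ceiling enforced at runtime
          cpu: "500m"       # CPU is throttled above this
          memory: "256Mi"   # the container is OOM-killed above this
```

The key asymmetry: exceeding a CPU limit throttles the container, while exceeding a memory limit kills it.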
-
Handling Ephemeral Containers in Docker

Problem: Docker containers are ephemeral by nature, meaning they don't retain data or logs once stopped. This can be challenging for applications where data persistence is essential. So, how can we ensure data availability even if a container goes down?

Solutions:
1. Docker Volumes
• Volumes allow data persistence by keeping data separate from the container. You can even detach a volume from one container and attach it to another.
• They're also shareable among multiple containers, making them ideal for apps needing high input/output operations on remote machines.
2. Bind Mounts
• With bind mounts, directories in the container and on the host are synchronized: any change in one is reflected in the other.
• Perfect for development environments where you need to sync files between host and container quickly.
3. Understanding --mount vs. -v Options
Both options attach volumes or bind mounts. However, --mount is more verbose and recommended for readability.

Examples:
1. Bind Mount:
• docker run -v /path/on/host:/path/in/container my-image
• docker run --mount type=bind,source=/path/on/host,target=/path/in/container my-image
2. Volume:
• docker run -d --mount source=vol_name,target=/folder_name --name container_name image_name

These methods let your Docker environment retain data and synchronize it across environments seamlessly.

For more details, refer to my Docker notes: https://lnkd.in/gyN4g3cV

#Docker #Containerization #DevOps #DataPersistence #SoftwareDevelopment #CloudComputing #TechSolutions #BindMount #Volumes
Docker | Notion
detailed-schooner-2b4.notion.site
-
I never had a reason to CPU profile Terraform until today, when I deployed 20,000 random pets, aiming to create some of the most unusual cycles in a graph imaginable. Typically there is no reason to profile Terraform, as it will point you to results you can't really change anyway. But if you want to profile Terraform for whatever reason, you can do so using pprof: https://lnkd.in/ecMKYaFy An excess of dependencies causes Terraform to spend more CPU cycles building and walking the dependency graph. Marshaling the state can also happen more than once — during an apply, for instance, an artifact is passed between plan and apply. But I'll spare you all the technical details. A takeaway from today: avoid marking objects with an explicit dependency using `depends_on` unless it's really necessary, especially within module calls or when referencing modules, as this creates unnecessary dependencies. Don't depends_on it, depend on it.
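To illustrate the takeaway with a sketch (module and resource names are invented here): referencing a module output already creates an implicit edge in the graph, so stacking `depends_on` on top of it only adds edges Terraform must build and walk.

```hcl
module "network" {
  source = "./modules/network"   # hypothetical module
}

# Implicit dependency: referencing the output is enough.
resource "aws_instance" "app" {
  subnet_id = module.network.subnet_id
}

# Redundant: depends_on a whole module makes this resource wait on
# *every* resource inside it, fattening the dependency graph.
resource "aws_instance" "app_redundant" {
  subnet_id  = module.network.subnet_id
  depends_on = [module.network]
}
```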
-
Finally coming to a close on the backend for one of the services I'm working on! I wanted to demo a feature I just added, since it would most likely get overlooked despite its complexity. What's being demonstrated is a custom distributed job queue running a task, then quite literally getting its plug pulled, and successfully recovering — all transparently to the user. This job queue is special in a couple of unique ways, the most important being that, when run across a Docker Swarm, it can scale to as many CPUs and nodes as you have available! The other is that if a node does get unplugged, a new task is automatically spun up on another node. Originally I wanted to use Apache Airflow or Celery for this service, but both had inherent issues with prepping the system beforehand that wouldn't have worked well in this instance. Still, I'm glad I got the chance to write a job queue and scheduler from scratch, since it led to a lot of cool insights. If you know something better I could have used, please let me know! The final step is cleaning up the queue time estimator, then tidying the project up for open source!
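The post doesn't show any internals, but the recovery behaviour it describes — a task whose node dies getting respun elsewhere — is commonly built on a lease/heartbeat pattern. A minimal single-process sketch of that pattern, with every name invented here:

```python
import time
from collections import deque

class LeaseQueue:
    """Toy job queue: a claimed task holds a lease, and if the lease
    expires without a heartbeat (e.g. the node was unplugged), the
    task is silently requeued for another worker."""

    def __init__(self, lease_seconds=5.0, clock=time.monotonic):
        self.pending = deque()
        self.leased = {}              # task -> lease expiry time
        self.lease_seconds = lease_seconds
        self.clock = clock            # injectable for testing

    def submit(self, task):
        self.pending.append(task)

    def claim(self):
        """Hand the next task to a worker, reclaiming dead leases first."""
        self._reclaim_expired()
        if not self.pending:
            return None
        task = self.pending.popleft()
        self.leased[task] = self.clock() + self.lease_seconds
        return task

    def heartbeat(self, task):
        """A live worker extends its lease periodically."""
        if task in self.leased:
            self.leased[task] = self.clock() + self.lease_seconds

    def complete(self, task):
        self.leased.pop(task, None)

    def _reclaim_expired(self):
        now = self.clock()
        for task, expiry in list(self.leased.items()):
            if expiry <= now:         # node presumably died: requeue
                del self.leased[task]
                self.pending.append(task)
```

In a real distributed version the lease table would live in shared storage visible to all Swarm nodes, but the requeue-on-expiry logic is the same.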
-
Check out this interesting article about the compilation methods for the Rockchip Driver! It covers how to compile the driver to the kernel and as a module, with detailed steps and solutions to encountered issues. Definitely worth a read! https://lnkd.in/ejpUpGZS #RockchipDriver #CompilationMethods #DriverCompilation #RockchipPlatform #KernelDriver #ModuleCompiling #TechTips
Compilation Method for Rockchip Driver
forlinx.net
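Setting the article's Rockchip-specific steps aside, building a driver as an out-of-tree module generally follows the standard kernel kbuild pattern — `mydriver.o` below is a placeholder for your object file:

```makefile
obj-m += mydriver.o

# Path to the build tree of the kernel you are targeting; for
# cross-compiling a Rockchip kernel, point KDIR at that source tree.
KDIR ?= /lib/modules/$(shell uname -r)/build

all:
	$(MAKE) -C $(KDIR) M=$(PWD) modules

clean:
	$(MAKE) -C $(KDIR) M=$(PWD) clean
```

Compiling the driver into the kernel instead typically means placing the source inside the kernel tree with an `obj-$(CONFIG_...)` entry and a matching Kconfig option, then enabling that option in the kernel config.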
-
One of the things we love most about Nix (@nixos_org) is that it elegantly solves problems in just about any domain of computing that you can imagine. In this new post on our blog, @lucperkins provides an example of using Nix to package and distribute WebAssembly (Wasm) binaries—and even run them using only a flake reference.
Nix as a WebAssembly build tool
determinate.systems
-
This will be quite the relief for a lot of developers and SREs burdened with finding what caused a CPU spike or slowed down a service. tl;dr: profiling data (CPU utilisation, etc.) has typically been available only at coarser granularities like services or pods, but this update brings it down to the granularity of individual functions — so you would know which specific code unit is consuming the resources, enabling much faster resolution. https://lnkd.in/dPN-EgsB
OpenTelemetry announces support for profiling
cncf.io
-
Learn about image memory consumption https://lnkd.in/e_NCdFPM 💾 Reduce memory usage when loading images from disk 🤔 Memory usage considerations ⚠️ Watch out for system-level caching #swiftlang #iosdev
Memory consumption when loading UIImage from disk
https://www.avanderlee.com
-
Optimizing Node.js Application Performance with Cluster and Worker Threads
medium.com