Are you tired of struggling to understand complex architecture diagrams? Do you wish there was a way to make documenting them easier and more efficient? Using GenAI, you can read an architecture diagram from an image and get a full description of it.

>>> /Users/prakashsharma/Desktop/Single-tenant.jpg tell me what this image is about?
Added image '/Users/prakashsharma/Desktop/Single-tenant.jpg'

The diagram shows two separate AWS accounts (tenant 1 and tenant 2) with their own SaaS control planes, which are connected to each other through a SaaS control plane. The tenants have multiple Availability Zones and subnets for redundancy.

Key Features:
- Multi-AZ Deployment: Each account has two Availability Zones for high availability and disaster recovery.
- Subnet Configuration: Subnets are configured so that instances can communicate with each other within the same zone.
- SaaS Control Plane Connection: The SaaS control planes of both tenants are connected, allowing them to share resources and services.
- Resource Sharing: Resources such as Auto Scaling groups, RDS instances, and Elastic Block Store (EBS) volumes can be shared between the two tenants.

Benefits:
- Improved High Availability: With multiple Availability Zones, applications can continue running even if one zone goes down.
- Enhanced Disaster Recovery: Because resources are deployed across Availability Zones, workloads can be recovered in a healthy zone if one fails.
- Simplified Resource Management: The SaaS control plane connection allows easy sharing of resources and services between tenants.

Conclusion: The diagram shows a well-designed architecture for multi-tenancy in AWS, with multiple Availability Zones and subnets configured for high availability and disaster recovery. The SaaS control plane connection enables resource sharing and simplifies management across both tenants.

>>> /exit
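The code below is not part of the original post; it is a minimal sketch of how a session like the one above could be scripted, assuming the ollama Python client and a locally pulled multimodal model such as llava. The model name, image path, and response shape are assumptions and may differ between client versions.

```python
# Assumes: `pip install ollama`, a running Ollama server, and a multimodal
# model pulled locally (e.g. `ollama pull llava`). All names are illustrative.
import ollama

IMAGE_PATH = "/path/to/architecture-diagram.jpg"  # replace with your own diagram

response = ollama.chat(
    model="llava",  # assumed vision-capable model; substitute the one you use
    messages=[{
        "role": "user",
        "content": "Tell me what this architecture diagram is about.",
        "images": [IMAGE_PATH],
    }],
)

# Response shape may vary by client version; older versions return a plain dict.
print(response["message"]["content"])
```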
More Relevant Posts
-
Greetings Everyone, please find the new architecture design of Google Cloud's Global Server Load Balancing (GSLB) for disaster recovery, high availability, fault tolerance, and Blue-Green deployments.

🌍 Geographical Resilience: Four regions across Europe, APAC, and North America ensure geographic diversity. Geographical distribution enhances Disaster Recovery capabilities.

🔄 Fault Tolerance with SD-WAN: SD-WAN dynamically optimizes network resources for enhanced Fault Tolerance. Adaptable network infrastructure ensures robust connectivity under varying conditions.

🌐 Intelligent Traffic Management: GSLB intelligently distributes traffic based on server load and proximity, keeping the service available by redirecting traffic away from affected or overloaded regions.

🚑 Failover Mechanism: GSLB is equipped with failover mechanisms for swift traffic redirection during disruptions. Enhances overall system availability and mitigates the impact of service outages.

🔄 Blue-Green Deployments: Parallel Blue-Green environments facilitate controlled software releases. Gradual traffic shifts during deployments minimize downtime and ensure user continuity (see the Python sketch after this post).

🚀 Content Delivery Network Integration: CDN strategically caches and delivers content from edge locations. Accelerates content delivery, reducing latency for an improved user experience.

🎯 Traffic Optimization: GSLB optimizes traffic distribution based on real-time conditions. Dynamic adaptation ensures a responsive and efficient system.

🌐 Resilient Disaster Recovery Strategy: Comprehensive GSLB architecture forms the backbone of a resilient DR strategy. Geographic diversity and failover mechanisms bolster recovery capabilities.

🔄 Agile Software Deployments: Blue-Green deployments offer flexibility and agility in releasing new features. Controlled transitions minimize disruptions during software updates.

👤🔍 User-Centric Performance: Prioritizes user experience with efficient content delivery and responsive traffic management. The overall architecture underscores a commitment to operational excellence in cloud computing.

This content is purely for the benefit of the technical, academic, and architectural community.
Medium Link: https://lnkd.in/gQNju5bZ
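The following is not from the original post; it is a minimal, self-contained Python sketch of the two routing ideas described above: weighted failover away from unhealthy regions and a gradual blue-green traffic shift. Region names, weights, and step counts are illustrative assumptions, not a GSLB API.

```python
import random

# Illustrative region catalog: name -> health flag and relative capacity weight.
regions = {
    "europe-west1": {"healthy": True, "weight": 3},
    "asia-southeast1": {"healthy": True, "weight": 2},
    "us-central1": {"healthy": False, "weight": 3},  # simulated regional outage
    "us-east1": {"healthy": True, "weight": 2},
}

def pick_region(catalog):
    """Weighted choice among healthy regions, mimicking GSLB failover behaviour."""
    healthy = {name: r["weight"] for name, r in catalog.items() if r["healthy"]}
    if not healthy:
        raise RuntimeError("no healthy region available")
    names, weights = zip(*healthy.items())
    return random.choices(names, weights=weights, k=1)[0]

def blue_green_split(step, total_steps):
    """Return (blue %, green %) for a gradual traffic shift during a release."""
    green = min(100, round(100 * step / total_steps))
    return 100 - green, green

if __name__ == "__main__":
    print("Routing 5 sample requests:", [pick_region(regions) for _ in range(5)])
    for step in range(5):
        print(f"deploy step {step}: blue/green split = {blue_green_split(step, 4)}")
```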
-
𝐇𝐨𝐰 𝐃𝐢𝐬𝐚𝐬𝐭𝐞𝐫 𝐑𝐞𝐜𝐨𝐯𝐞𝐫𝐲 𝐖𝐨𝐫𝐤𝐬 𝐢𝐧 𝐭𝐡𝐞 𝐀𝐖𝐒 𝐂𝐥𝐨𝐮𝐝❗

In today's interconnected world, businesses face a growing array of potential disruptions. AWS Disaster Recovery offers a comprehensive set of services and features to help businesses plan for and recover from disruptions, ensuring their resilience and ability to withstand any storm.

✅ Data Replication: Safeguarding Your Critical Information
AWS offers a variety of data replication services, such as AWS Storage Gateway, Amazon S3, and AWS DataSync, to ensure that your data is consistently replicated to a secondary location.

✅ Compute Resources: Powering Your Recovery Efforts
In the event of a disaster, AWS provides a wide range of compute resources, such as Amazon EC2 instances, to quickly provision and deploy your applications and services in the secondary location.

✅ Automated Scaling: Adapting to Fluctuating Demands
AWS Auto Scaling allows your applications to automatically adjust their capacity based on demand.

✅ Load Balancing: Distributing Traffic for Optimal Performance
Elastic Load Balancing (ELB) helps distribute incoming application traffic across multiple targets, ensuring high availability and fault tolerance.

✅ Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO): Tailoring Your Recovery Plan
AWS provides tools to set RTO and RPO targets for your applications. These targets define the acceptable downtime and data loss in a disaster scenario (a short RPO/RTO check sketch follows this post).

✅ AWS Disaster Recovery Architecture Network Components
The AWS Disaster Recovery architecture is built on a foundation of secure and reliable network components:
💥 Amazon VPC (Virtual Private Cloud): provides isolated network environments.
💥 AWS Direct Connect or VPN: provide secure connectivity between your primary and secondary environments.
💥 AWS Backup and Storage Gateway: provide data backup, storage, replication, and recovery capabilities.
💥 Load Balancers: distribute incoming traffic across multiple targets for optimal performance.

By leveraging these components, AWS Disaster Recovery empowers businesses to safeguard their critical systems and data, ensuring their resilience and ability to thrive in the face of disruptions.

Cloudairy Cloudchart provides an easy-to-use platform for designing cloud architectures with several practical features:
✅ Simple drag-and-drop for adding components to diagrams.
✅ Customizable diagrams to fit specific needs.
✅ Real-time collaboration allows teams to work together online.
✅ Version control to track and revert changes.
✅ Options to export diagrams in various formats.
✅ Animated diagram flows for clearer visualization.
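The sketch below is not from the original post; it illustrates the RTO/RPO point above in plain Python by checking whether the newest replicated copy and the observed failover time meet the stated targets. The 15-minute RPO and 1-hour RTO are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

# Illustrative targets: at most 15 minutes of data loss, 1 hour of downtime.
RPO = timedelta(minutes=15)
RTO = timedelta(hours=1)

def rpo_met(last_replicated_at, now=None):
    """True if the newest replicated copy is recent enough to satisfy the RPO."""
    now = now or datetime.now(timezone.utc)
    return (now - last_replicated_at) <= RPO

def rto_met(outage_start, service_restored_at):
    """True if failover to the secondary site finished within the RTO."""
    return (service_restored_at - outage_start) <= RTO

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    print("RPO met:", rpo_met(now - timedelta(minutes=10), now))  # True: 10 min <= 15 min
    print("RTO met:", rto_met(now - timedelta(minutes=90), now))  # False: 90 min > 1 h
```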
-
Top Tips for Deploying Services on Kubernetes

Deploying applications on Kubernetes can greatly enhance scalability, resilience, and efficiency. However, to ensure a smooth and successful deployment, here are some best practices that can help:

Use Helm for Package Management 🎯
Helm is a Kubernetes package manager that simplifies application deployment. With Helm, you can create reusable charts and manage complex deployments, reducing manual errors.

Leverage Horizontal Pod Autoscaling (HPA) for Scalability 📈
Kubernetes' HPA allows your services to scale automatically based on CPU, memory, or custom metrics. This ensures your application can handle traffic spikes without manual intervention.

Implement Rolling Updates for Zero Downtime ⏳
Use Kubernetes' rolling update feature to ensure that your application updates don't cause downtime. It will replace your old Pods with new ones gradually, making sure the service remains available.

Monitor Resource Usage and Set Limits 🔍
Defining resource requests and limits ensures that your services have enough CPU and memory to function properly, preventing noisy neighbors from affecting your service performance (see the sketch after this post).

Leverage Kubernetes ConfigMaps and Secrets 🔑
Use ConfigMaps for storing non-sensitive configuration data and Secrets for handling sensitive information like API keys or passwords. This keeps your configurations secure and manageable.

Enable Liveness and Readiness Probes 🩺
Set up liveness and readiness probes to ensure Kubernetes can detect unhealthy Pods and restart them automatically. Readiness probes ensure your service isn't exposed to traffic until it's ready.

Secure Your Cluster with Role-Based Access Control (RBAC) 🔒
Define permissions carefully using RBAC to restrict access to sensitive resources in your cluster. This enhances security by ensuring only authorized users and services can interact with critical components.

Set Up Centralized Logging and Monitoring 📊
Use tools like Prometheus for monitoring and the ELK Stack (Elasticsearch, Logstash, Kibana) for logging to track your application's performance and troubleshoot issues effectively.

Plan for Backup and Disaster Recovery 📦
Ensure that persistent data stored in Persistent Volumes (PVs) is backed up, and have a disaster recovery plan in place to avoid data loss during critical failures.

Use Network Policies for Service Security 🚧
Implement network policies to control traffic flow between Pods. This helps secure communication and limits exposure to unnecessary or unauthorized network access.

Kubernetes is a powerful platform for managing applications at scale, and these tips can help you optimize your deployments for high availability and efficiency.
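The following is not part of the original post; it is a minimal Python sketch (assuming PyYAML is installed) that assembles a container spec with resource requests/limits and liveness/readiness probes as plain data and prints a Kubernetes-style YAML fragment. The image name, ports, paths, and thresholds are illustrative assumptions.

```python
import yaml  # PyYAML: pip install pyyaml

# Illustrative container spec combining two of the tips above:
# resource requests/limits plus liveness and readiness probes.
container = {
    "name": "web",
    "image": "example.org/web:1.2.3",  # hypothetical image reference
    "resources": {
        "requests": {"cpu": "250m", "memory": "256Mi"},
        "limits": {"cpu": "500m", "memory": "512Mi"},
    },
    "livenessProbe": {
        "httpGet": {"path": "/healthz", "port": 8080},
        "initialDelaySeconds": 10,
        "periodSeconds": 15,
    },
    "readinessProbe": {
        "httpGet": {"path": "/ready", "port": 8080},
        "periodSeconds": 5,
    },
}

# Wrap the container in the pod-template shape used by a Deployment spec.
spec = {"spec": {"template": {"spec": {"containers": [container]}}}}
print(yaml.safe_dump(spec, sort_keys=False))
```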
-
Exciting announcement: Cutover AI is now available!

As IT disaster recoveries, application migrations, and release processes increase in complexity, new approaches to increase efficiency are more important than ever. Cutover #AI will help to reduce manual effort and increase efficiency when using Cutover runbooks for IT operations.

Cutover AI leverages Amazon Bedrock and foundation models including Amazon Titan, Claude from Anthropic, and Meta's Llama. Cutover is applying a wide range of AI models to provide greater efficiencies and benefits to enterprises using Cutover's automated runbooks.

The Cutover platform is used by major enterprises, including the top three US banks and six of the world's largest asset management firms, to improve and innovate their IT operations for IT disaster recovery, cloud migration, and application releases. Cutover AI will accelerate these operations in three key areas:

Create: Cutover AI enables users to generate runbooks in seconds, not hours. Using text prompts and source content from Cutover, Confluence, or other third-party data sources, Cutover AI can auto-generate runbooks with tasks and dependencies.

Improve: Cutover AI can evaluate a runbook's effectiveness and suggest intelligent improvements, helping to uncover potential bottlenecks or inefficiencies in your Cutover runbooks.

Summarize: Cutover AI can quickly analyze the contents of a runbook and provide a short summary of what it does, making it easy for users to understand a runbook at a glance without runbook managers having to write descriptions manually.

"We're excited to bring new levels of innovation to our SaaS platform and automated runbooks," states Kieran Gutteridge, CTO of Cutover. He continues, "Our large enterprise customers have thousands of recovery and migration plans stored as Cutover runbooks. Cutover AI will make those IT operations teams more efficient and remove complexity in creating and managing runbooks."

Find out more about Cutover AI:
-
Enterprise Architecture: Piecing It Together

In the realm of IT, where technology shifts can reshape the landscape in an instant, enterprise architecture (EA) acts as the guiding force steering an organisation's IT setup and plans. It goes beyond creating diagrams or dreaming up tech paradises; EA serves as a planning and communication tool, essential for aligning IT with business objectives, facilitating adaptable change, and optimising technology usage across the entire organisation.

Unravelling the Foundations of Enterprise Architecture

Picture enterprise architecture as the master plan of a technology cityscape, with diverse structures, inhabitants, and the infrastructure that supports them. This holistic approach to IT strategy extends beyond technology: it encompasses everything from information systems and processes to frameworks. Understanding EA in its entirety involves grasping the significance of its four core aspects:

1. Business Architecture: examines the organisation from a business functionality angle, outlining ways to enhance operations by pinpointing enterprise changes and fostering synergy to achieve specific objectives.
2. Information Architecture: revolves around the data within a company, offering insights into how data is utilised and aligned with business strategies.
3. Application Architecture: the software framework supporting your business operations and processes.
4. Technology Architecture: encompasses elements like hardware, software, networks, and IT resources that support business functions, emphasising the importance of integration and standardisation for meeting business targets.

These dimensions harmonise with each other to shape an approach that keeps an enterprise's IT landscape sustainable and adaptable.

Sailing Through Transformation Challenges

In an era where digital transformation is not merely optional but essential for survival, enterprise architecture serves as the guiding force for navigating uncharted waters. Here are some ways EA can empower organisations to succeed in their transformation efforts:

Establishing a Common Understanding: EA acts as a bridge to foster agreement within the organisation regarding the future state of IT, ensuring alignment among business leaders and developers.

Embracing Agile Approaches: By providing a framework for iterative development practices, organisations can create, test, and implement high-quality software solutions that meet evolving business needs.

Strengthening IT Governance: EA policies and methods help optimise IT investments by aligning them with business goals, resulting in enhanced value delivery and risk management.

Through these means, enterprise architecture plays a key role in reshaping an organisation's technology infrastructure into an agile and value-driven business asset.
-
🌟 Mastering Cloud Migration Architecture 🌟

Crafting cloud solutions isn't just about technology; it's about understanding business needs, harnessing platform capabilities, and adhering to best practices. As an architect specializing in cloud migration, I've developed a holistic approach to ensure seamless transitions and optimized outcomes. Here's how:

1️⃣ Stakeholder Engagement: Start by understanding stakeholders' perspectives and project requirements, both business and technical. This ensures alignment and sets the stage for success.
2️⃣ Existing Environment Analysis: Before making any moves, thoroughly assess the current setup. This includes infrastructure, security protocols, and technical knowledge within the organization.
3️⃣ Compliance and Security: No compromise on compliance! Identify and strategize for regulatory obligations while implementing robust security measures to safeguard data.
4️⃣ Scalability and Performance: Architect for scalability, reliability, and peak performance. From auto-scaling to disaster preparedness, ensure your solution can handle anything thrown its way.
5️⃣ Cost Optimization: Keep an eye on the budget! Implement strategies to optimize costs without sacrificing quality or performance.
6️⃣ Prototyping and Refinement: Test, refine, repeat. Develop prototypes to gather insights and iterate based on stakeholder feedback.
7️⃣ Documentation and Review: Document every step and seek expert review to ensure a solid foundation for your architecture.

But it doesn't stop there! Successful delivery means continuous engagement, adaptation, and optimization:

🔹 Governance and Compliance: Set up governance standards and maintain regulatory adherence throughout the project lifecycle.
🔹 Automated Deployment: Streamline deployment processes through automation, ensuring efficiency and minimizing errors.
🔹 Monitoring and Security: Implement robust monitoring and security protocols, integrating automated response mechanisms to address issues swiftly.
🔹 Cost Efficiency: Continuously track spending and adjust resource allocation to optimize expenses (a small cost-tracking sketch follows this post).
🔹 Continuous Improvement: Foster a culture of innovation and embrace emerging technologies to drive ongoing enhancement.
🔹 Stakeholder Communication: Keep stakeholders in the loop with consistent updates and inclusive involvement in decision-making processes.

And don't forget to think beyond the conventional! Embrace innovative thinking to drive transformative change and exceed expectations. Every project is unique, but with the right approach and adherence to best practices, success is within reach. 💡💻

#CloudMigration #Architecture #Innovation #ContinuousImprovement
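This sketch is not from the original post; it illustrates the cost-tracking idea under "Cost Efficiency" using boto3's Cost Explorer client, assuming AWS credentials and Cost Explorer access are already configured. The budget threshold is an illustrative assumption, not a recommendation.

```python
import boto3
from datetime import date, timedelta

MONTHLY_BUDGET_USD = 5000.0  # illustrative threshold for this sketch

def month_to_date_spend():
    """Return month-to-date unblended cost in USD from the Cost Explorer API."""
    ce = boto3.client("ce")
    today = date.today()
    start = today.replace(day=1)
    if start == today:                      # on the 1st, look at yesterday instead
        start = today - timedelta(days=1)
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": today.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
    )
    amount = resp["ResultsByTime"][0]["Total"]["UnblendedCost"]["Amount"]
    return float(amount)

if __name__ == "__main__":
    spend = month_to_date_spend()
    print(f"Month-to-date spend: ${spend:,.2f}")
    if spend > MONTHLY_BUDGET_USD:
        print("Over budget: review resource allocation and rightsizing.")
```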
-
Building a deployment architecture isn’t just about picking the right tools—it’s about ensuring your application can grow, stay secure, and run efficiently. Dive into the essentials of crafting a deployment strategy that works for you!

Why does it matter? The choices made at the beginning can be the difference between success and struggle:

🔹 Scalability & Performance: Picture effortlessly managing thousands of users, even as your traffic soars. That’s the power of smart scalability and performance optimization.
🔹 Security & Compliance: Safeguarding your data is non-negotiable. Implement robust security measures and stay compliant with industry standards to build trust and mitigate risks.
🔹 Infrastructure Choices: Whether it’s cloud, on-premises, containerization, or Infrastructure as Code (IaC), the right setup is crucial for a smooth deployment.
🔹 Monitoring & Logging: Stay ahead of issues with proactive monitoring and comprehensive logging. Ensure your app’s health is always in check before problems impact your users.
🔹 Cost Efficiency: Prevent overspending with effective budget management and cost tracking. Make every dollar work for you!
🔹 Disaster Recovery: Prepare for the unexpected with solid backup and disaster recovery plans. Keep your app running smoothly even in a crisis.
🔹 Deployment Strategy: Achieve seamless updates with zero-downtime deployments and automated rollbacks (see the health-gate sketch after this post). Your users won’t even notice the changes happening behind the scenes.
🔹 CI/CD Pipeline: Streamline your build, test, and deployment processes to speed up delivery and improve reliability.

Curious about how to implement these insights? Explore the full article to discover how to design a deployment architecture that’s both powerful and practical.

🔗 Read more: https://lnkd.in/gSYRSARy

#DevOps #CloudComputing #InfrastructureDesign #SoftwareDeployment #ContinuousDelivery #Security #CICD
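The code below is not part of the original post; it is a minimal Python sketch of a zero-downtime deployment gate: a new release is promoted only after several consecutive successful health checks, otherwise the decision is to roll back. The endpoint URL, check count, and interval are illustrative assumptions, and the actual rollback would be handed off to your deployment tooling.

```python
import time
import urllib.error
import urllib.request

HEALTH_URL = "http://localhost:8080/healthz"  # hypothetical health endpoint
CHECKS, INTERVAL_S = 5, 2                     # illustrative gate settings

def healthy(url, timeout=2.0):
    """One health probe: an HTTP 200 within the timeout counts as healthy."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def promote_or_rollback():
    """Gate a release on consecutive healthy checks; otherwise roll back."""
    for _ in range(CHECKS):
        if not healthy(HEALTH_URL):
            return "rollback"   # trigger your deployment tool's rollback here
        time.sleep(INTERVAL_S)
    return "promote"

if __name__ == "__main__":
    print("decision:", promote_or_rollback())
```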
-
𝐓𝐨𝐨𝐥 𝐒𝐭𝐚𝐜𝐤 𝐟𝐨𝐫 𝐩𝐫𝐨𝐝𝐮𝐜𝐭𝐢𝐨𝐧 𝐊𝐮𝐛𝐞𝐫𝐧𝐞𝐭𝐞𝐬 𝐄𝐧𝐯𝐢𝐫𝐨𝐧𝐦𝐞𝐧𝐭 ❗

𝐒𝐞𝐜𝐮𝐫𝐢𝐭𝐲: In today's cyber landscape, security takes center stage. Leverage the power of Aqua Security and Sysdig Secure, robust container security tools, to fortify your clusters and safeguard workloads.

𝐍𝐞𝐭𝐰𝐨𝐫𝐤𝐢𝐧𝐠: Smooth network connectivity is the lifeblood of containerized apps. Employ Kubernetes-native solutions like Calico and Cilium to effortlessly manage network policies, ensuring seamless communication among your applications.

𝐂𝐨𝐧𝐭𝐚𝐢𝐧𝐞𝐫 𝐑𝐮𝐧𝐭𝐢𝐦𝐞: At the core of Kubernetes lies the container runtime. Docker and other container runtimes simplify the management of container lifecycles and runtime environments.

𝐂𝐥𝐮𝐬𝐭𝐞𝐫 𝐌𝐚𝐧𝐚𝐠𝐞𝐦𝐞𝐧𝐭: Streamline cluster management for scalable applications with tools like Kops and Rancher. They take the complexity out of cluster provisioning and upkeep.

𝐌𝐨𝐧𝐢𝐭𝐨𝐫𝐢𝐧𝐠 𝐚𝐧𝐝 𝐎𝐛𝐬𝐞𝐫𝐯𝐚𝐛𝐢𝐥𝐢𝐭𝐲: Maintain a vigilant watch over your Kubernetes environment using Prometheus for monitoring and Grafana for intuitive visualization, and remember to establish centralized logging through Fluentd or the Elastic Stack (a short query sketch follows this post).

𝐈𝐧𝐟𝐫𝐚𝐬𝐭𝐫𝐮𝐜𝐭𝐮𝐫𝐞 𝐎𝐫𝐜𝐡𝐞𝐬𝐭𝐫𝐚𝐭𝐢𝐨𝐧: To automate infrastructure provisioning and scaling, rely on indispensable tools like Terraform, along with Helm for efficient package management. They empower you to define and manage your infrastructure as code.

When combined, these tools create a robust Kubernetes ecosystem that empowers you to securely and efficiently deploy, manage, and scale containerized applications.

💡 𝐇𝐞𝐫𝐞'𝐬 𝐰𝐡𝐚𝐭 𝐦𝐚𝐤𝐞𝐬 𝐂𝐥𝐨𝐮𝐝𝐚𝐢𝐫𝐲 𝐃𝐫𝐚𝐰𝐛𝐨𝐚𝐫𝐝 𝐮𝐧𝐢𝐪𝐮𝐞:

𝐃𝐫𝐚𝐠-𝐚𝐧𝐝-𝐃𝐫𝐨𝐩 𝐀𝐳𝐮𝐫𝐞 𝐂𝐨𝐦𝐩𝐨𝐧𝐞𝐧𝐭𝐬: Build diagrams by dragging and dropping pre-built icons representing various Azure services like Azure Functions, Logic Apps, SQL Database, and more.

𝐂𝐮𝐬𝐭𝐨𝐦𝐢𝐳𝐚𝐛𝐥𝐞 𝐂𝐚𝐧𝐯𝐚𝐬: The canvas allows you to tailor your diagrams to your specific needs. Add labels, descriptions, or custom components to provide additional context or represent unique elements within your architecture.

𝐎𝐧𝐥𝐢𝐧𝐞 𝐂𝐨𝐥𝐥𝐚𝐛𝐨𝐫𝐚𝐭𝐢𝐨𝐧: Share your diagrams securely online with team members or external stakeholders. Everyone can access the latest version and collaborate in real time.

𝐕𝐞𝐫𝐬𝐢𝐨𝐧 𝐂𝐨𝐧𝐭𝐫𝐨𝐥: Track changes made to your diagrams over time. Cloudairy allows you to revert to previous versions if needed.

𝐄𝐱𝐩𝐨𝐫𝐭 𝐎𝐩𝐭𝐢𝐨𝐧𝐬: Easily export your diagrams in various formats like PNG, JPG, or SVG for presentations, documentation, or sharing offline.

𝐒𝐢𝐠𝐧 𝐮𝐩 𝐭𝐨 𝐛𝐞 𝐧𝐨𝐭𝐢𝐟𝐢𝐞𝐝 𝐰𝐡𝐞𝐧 𝐂𝐥𝐨𝐮𝐝𝐚𝐢𝐫𝐲 𝐃𝐫𝐚𝐰𝐛𝐨𝐚𝐫𝐝 𝐥𝐚𝐮𝐧𝐜𝐡𝐞𝐬❗ 𝐑𝐞𝐠𝐢𝐬𝐭𝐞𝐫 𝐡𝐞𝐫𝐞: https://lnkd.in/eYrNbE5k

#cloudcomputing #devops #kubernetes #cloudairy
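The following is not from the original post; it is a minimal Python sketch of the monitoring point above, running a PromQL instant query against a Prometheus server's /api/v1/query endpoint using only the standard library. The server address and query expression are illustrative assumptions.

```python
import json
import urllib.parse
import urllib.request

PROMETHEUS = "http://localhost:9090"  # assumed local Prometheus server
QUERY = "sum(rate(container_cpu_usage_seconds_total[5m]))"  # illustrative PromQL

def instant_query(expr):
    """Run a PromQL instant query against Prometheus's /api/v1/query endpoint."""
    url = f"{PROMETHEUS}/api/v1/query?{urllib.parse.urlencode({'query': expr})}"
    with urllib.request.urlopen(url, timeout=5) as resp:
        payload = json.load(resp)
    if payload.get("status") != "success":
        raise RuntimeError(f"query failed: {payload}")
    return payload["data"]["result"]

if __name__ == "__main__":
    for series in instant_query(QUERY):
        # Each result carries its label set and an [timestamp, value] pair.
        print(series["metric"], series["value"])
```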