Google Cloud is addressing growing concerns about #AI security with a secure AI framework built on its internal security practices. At the recent #SICW2024, we caught up with Google Cloud's CISO Phil Venables on how the framework can help organisations manage the software lifecycle and operational risks of AI deployments.
-
“But we recognise it’s not enough to have strong foundational infrastructure for our own AI security,” Venables said. “We have to empower customers to manage AI safely and securely in their environments.” Comments from Phil Venables at the recent SICW/GovWare 2024 event. The article below outlines Google Cloud's approach to safe and responsible AI: a secure AI framework designed to equip businesses with tools and guidance to better manage the risks associated with AI deployments. https://lnkd.in/gSCmy5k9
Inside Google Cloud’s secure AI framework | Computer Weekly
computerweekly.com
-
It's crucial to prioritize strong AI governance, data quality, access controls, and security assessments for all AI applications, both public-facing and internal, to ensure safe, secure, and responsible AI implementation.
"Oops! Five serious #genAI security mistakes to avoid" by Google Cloud Office of the CISO
"Oops! Five serious #genAI security mistakes to avoid" by Google Cloud Office of the CISO
google.smh.re
-
Orca Security needed a tool to stay ahead of the curve and keep pace with the demands of cybersecurity teams, who need to understand quickly and intuitively exactly what’s in their cloud environments. Orca implemented Elasticsearch, integrating advanced search capabilities to create a smarter, AI-driven search engine for its security solution. This strategic choice transformed Orca’s platform, enabling users to perform complex, domain-specific searches easily and accurately. The blog outlines some of the key advantages the team at Orca Security saw in Elasticsearch.
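As a rough illustration of the domain-specific searches described above, here is a minimal sketch using the Elasticsearch Python client. The index name and field names (cloud_assets, asset_type, exposure, vulnerability.severity) are illustrative assumptions, not Orca's actual schema.

```python
# Sketch: filter an (assumed) index of cloud assets down to
# internet-exposed VMs carrying critical vulnerabilities.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200")

response = es.search(
    index="cloud_assets",  # hypothetical index
    query={
        "bool": {
            "filter": [
                {"term": {"asset_type": "vm"}},
                {"term": {"exposure": "public"}},
                {"term": {"vulnerability.severity": "critical"}},
            ]
        }
    },
)

# Print the matching assets; the _source field names are placeholders.
for hit in response["hits"]["hits"]:
    print(hit["_source"]["asset_id"], hit["_source"]["cloud_provider"])
```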
How Orca leverages Search AI to help users gain visibility, achieve compliance, and prioritize risks
elastic.co
-
Data-in-use security and confidentiality are critical, especially in public cloud environments. The challenge becomes even more nuanced and complex for AI/ML data and workloads. This fascinating blog dives into the intersection of confidential computing and AI/ML, and references some equally terrific posts on remote attestation and other related topics. https://lnkd.in/e67RMqz9
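The core pattern the blog describes is attestation-gated key release: a model-decryption key is handed to a workload only after its runtime environment proves itself trustworthy. Here is a minimal Python sketch of that pattern; verify_attestation() is a placeholder stub standing in for a real remote-attestation verifier or key broker, not the Confidential Containers API.

```python
# Attestation-gated key release, sketched: the encrypted model can only
# be decrypted after the workload's attestation evidence is verified.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def verify_attestation(evidence: bytes) -> bool:
    # Placeholder: a real verifier checks TEE evidence (a quote/report)
    # against reference measurements before releasing any secret.
    return evidence == b"trusted-measurement"


def release_model_key(evidence: bytes, key: bytes) -> bytes:
    # Only hand out the decryption key to attested workloads.
    if not verify_attestation(evidence):
        raise PermissionError("attestation failed: key not released")
    return key


# Demo: encrypt a "model", then decrypt only after attestation succeeds.
key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
encrypted_model = AESGCM(key).encrypt(nonce, b"model-weights", None)

released = release_model_key(b"trusted-measurement", key)
print(AESGCM(released).decrypt(nonce, encrypted_model, None))
```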
Protecting your intellectual property and AI models using Confidential Containers
redhat.com
-
Unlock the full potential of generative AI while ensuring robust security measures, with insights from this comprehensive guide by Maitreya Ranganath and Dutch Schwartz on Amazon Web Services (AWS): https://lnkd.in/d_eMWSBm #aws #awscloud
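One control layer such guides cover is validating input before it reaches the model. Below is a minimal sketch of that layer using boto3 against Amazon Bedrock; the model ID, length limit, and blocklist are illustrative assumptions rather than values prescribed by the guide.

```python
# Sketch: naive input validation wrapped around an Amazon Bedrock call.
import json

import boto3

MAX_PROMPT_CHARS = 2000
BLOCKLIST = ("ignore previous instructions",)  # toy prompt-injection check


def invoke_guarded(prompt: str) -> str:
    # Reject oversized or blocklisted prompts before any model call.
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds length limit")
    if any(phrase in prompt.lower() for phrase in BLOCKLIST):
        raise ValueError("prompt rejected by blocklist")

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 256,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    return json.loads(response["body"].read())["content"][0]["text"]
```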
Securing generative AI: Applying relevant security controls | Amazon Web Services
aws.amazon.com
-
📣 Exciting news as we roll into #AWSreInforce this week! Today, we are thrilled to announce the extension of #AI Workload Security to Amazon Bedrock, Amazon SageMaker, and Amazon Q! Uniquely positioned with real-time detections and deep runtime visibility, Sysdig can now help identify and address potential threats within AI workloads on AWS. By extending AI Workload Security to AWS AI services and ingesting real-time signals from AWS CloudTrail logs, Sysdig can detect and enable swift response to events such as:
🕵 Reconnaissance Activity
🗃️ Data Tampering
👁️🗨️ Public Exposure
Stop by #reInforce booth 410 or read our blog to learn more about AI Workload Security for Amazon Web Services (AWS):
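For a sense of the underlying signal (this is not Sysdig's implementation), here is a minimal sketch that looks up recent Amazon Bedrock activity in AWS CloudTrail and flags identities making bursts of enumeration calls; the event names and threshold are illustrative assumptions.

```python
# Sketch: flag reconnaissance-like bursts of Bedrock enumeration calls
# found in CloudTrail. Event names and the threshold are illustrative.
from collections import Counter

import boto3

RECON_EVENTS = {"ListFoundationModels", "GetFoundationModel"}

cloudtrail = boto3.client("cloudtrail")
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "bedrock.amazonaws.com"}
    ],
    MaxResults=50,
)["Events"]

# Count enumeration calls per identity (Username may be absent).
calls_by_user = Counter(
    event.get("Username", "unknown")
    for event in events
    if event["EventName"] in RECON_EVENTS
)

for user, count in calls_by_user.items():
    if count >= 10:  # illustrative threshold
        print(f"possible recon: {user} made {count} enumeration calls")
```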
Securing AI in the Cloud: AI Workload Security for AWS | Sysdig
sysdig.com
-
🚀 Highlighting FBI’s Cloud Journey☁️💻 At the Red Hat Government Symposium, FBI’s Daniel Rubenstein shared how the bureau’s critical systems—like fingerprint matching and facial recognition—are now 95% in the cloud. 🌐 The focus? Cloud maturity and empowering developers. DOJ’s Alex Reber also shared insights on modernization, emphasizing containerization and tools like Kubernetes to enhance application delivery. 💡 Proud to see federal leaders leveraging open-source innovation to drive transformation! 📖 Read more in the article below. #RedHatGov #CloudTransformation #FBI #DOJ #AI #Kubernetes #PublicSector #DigitalTransformation
FBI Is Far Along in Cloud Journey, Official Says at Red Hat Government Symposium
meritalk.com
-
Cloud Security Alliance Releases Three Papers Offering Guidance for Successful Artificial Intelligence (AI) Implementation: Report series charts course for responsible and secure development and deployment of AI. RSA Conference (San Francisco) – May 6, 2024 – The Cloud Security Alliance (CSA), the world’s leading organization dedicated to defining standards, certifications, and best practices to help ensure a secure cloud computing environment, today issued AI Organizational Responsibilities - Core Security Responsibilities, AI Resilience: A Revolutionary Benchmarking Model for AI, and Principles to Practice: Respon...
Cloud Security Alliance Releases Three Papers Offering Guidance | CSA
cloudsecurityalliance.org
-
Best practices for securing your Machine Learning (ML) solutions on GCP:

Data Security:
- Encryption: Encrypt data at rest and in transit with strong algorithms like AES-256. Use Cloud Key Management Service (KMS) for key management and granular access control (see the sketch after this list).
- Data Access Control: Implement Identity and Access Management (IAM) for fine-grained access control based on user roles and permissions. Consider Data Loss Prevention (DLP) policies to prevent unauthorized data exfiltration.
- Data Provenance and Lineage: Use Data Catalog to track data lineage and understand its origin and usage. Enable Cloud Audit Logs and BigQuery audit logs to monitor data access.
- Data Labeling: Use clear and descriptive labels for training data to mitigate bias and poisoning attacks.

Model Security:
- Threat Modeling: Identify potential vulnerabilities like data poisoning, bias, and model manipulation. Implement adversarial training and input validation to address these threats.
- Secure Model Development and Deployment: Leverage Vertex AI as a secure environment for training and deploying your ML models. Use Vertex AI Workbench for controlled access and collaboration.
- Continuous Monitoring: Monitor models for drift, bias, and security issues with Vertex AI Model Monitoring and Security Command Center.
- Explainability: Employ Vertex Explainable AI to understand model predictions and identify potential biases.

Application Security:
- Least Privilege: Implement IAM for fine-grained access control to all GCP resources used by your application.
- API Security: Use Apigee for secure API access and management. Validate and sanitize user input to prevent injection attacks like XSS and RCE.
- Vulnerability Scanning: Regularly scan your application and underlying infrastructure for vulnerabilities with tools like Security Command Center, and patch promptly.
- Web Application Firewall (WAF): Deploy Cloud Armor to protect your application from common web attacks like SQL injection and DDoS. Configure rules based on your application's specific needs.

Additional Practices:
- Compliance: Align your security practices with relevant industry regulations like GDPR and HIPAA. Use tools like Security Command Center and Data Catalog for compliance assessments.
- Governance: Establish clear policies and procedures for managing ML solutions, supported by posture management tooling such as Security Command Center.
- Security Awareness: Train employees involved in developing and using ML on security best practices.

Specific Considerations for Generative Models:
- Input Validation: Sanitize user-generated text to prevent malicious code injection.
- Output Filtering: Prevent the model from generating harmful or offensive content.
- Adversarial Training: Train the model on adversarial examples to improve its robustness.

Resources: Google Cloud Security Best Practices: https://lnkd.in/gSQFm4zg
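As a minimal sketch of the Encryption item above, here is how a payload can be encrypted and decrypted with Cloud KMS using the google-cloud-kms Python client; the project, location, key ring, and key names are placeholders.

```python
# Sketch: encrypt a sensitive payload with a Cloud KMS key before it
# lands in storage. All resource names below are placeholders.
from google.cloud import kms

client = kms.KeyManagementServiceClient()
key_name = client.crypto_key_path(
    "my-project", "us-central1", "ml-keyring", "training-data-key"
)

plaintext = b"sensitive training record"
encrypted = client.encrypt(request={"name": key_name, "plaintext": plaintext})
decrypted = client.decrypt(
    request={"name": key_name, "ciphertext": encrypted.ciphertext}
)
assert decrypted.plaintext == plaintext
```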
Cloud Security Best Practices Center | Google Cloud
cloud.google.com
-
We're excited to share how we're leveraging Elastic to deliver AI Search within the Orca Platform. AI Search helps teams use everyday language for complex cloud security tasks across different providers. Using Elasticsearch, it quickly spots risks, runs audits, and checks cloud exposure without needing deep technical expertise, making advanced cloud security more accessible than ever. This article dives into our collaborative effort, demonstrating what's possible with the right partners. Read more about our journey with Elastic: https://lnkd.in/g_zYFqhU #CloudSecurity #Cybersecurity #AI #OrcaSecurity #Elasticsearch
How Orca leverages Search AI to help users gain visibility, achieve compliance, and prioritize risks
elastic.co