We’re going to continue to see investment in automatic remediation of security issues using artificial intelligence. Companies like Snyk and GitHub are investing heavily in this space. I personally think it’s good that we have a way of potentially catching up on the sheer quantity of software bugs we find in our apps. But I’m still a fan of eradicating bug classes using well-defined platforms with inherent security properties. Prevention is better than the cure, but AI might make that pill less bitter to swallow. https://lnkd.in/gHTqe3RE
Cole Cornford’s Post
-
Fixing vulnerabilities in code repositories is one of those time-consuming tasks that has to be done. The problem can be as simple as a package you use being flagged, or as involved as a flaw in something you've written, but staying up to date and finding these issues has always been work someone has to do. With the latest release from GitHub, this might be a thing of the past: CodeQL can not only detect security vulnerabilities but also automatically remediate them. This type of AI support could be a great time saver, but keep in mind it will only fix the issues it knows about, so don't leave it completely up to the machine. #ai #softwaredevelopment #security https://lnkd.in/gJXCftWY
GitHub's latest AI tool can automatically fix code vulnerabilities | TechCrunch
https://techcrunch.com
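To make the idea concrete, here is an illustrative before/after of the kind of remediation such a tool proposes for a classic bug class (SQL injection). This is a hand-written Python sketch, not CodeQL's actual output; the function names are invented for illustration:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is interpolated directly into the SQL string,
    # so a crafted username can alter the query
    cur = conn.execute(f"SELECT id FROM users WHERE name = '{username}'")
    return cur.fetchone()

def find_user_fixed(conn, username):
    # The kind of fix an autofix tool typically suggests: a parameterized
    # query, where the driver binds the value safely instead of the string
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchone()
```

The fixed version treats the input purely as data, which is why tools can often propose this rewrite mechanically once the vulnerable pattern is detected.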
-
⚠️ Developers, beware! Hackers can poison AI models and software. Vulnerabilities found in TensorFlow CI/CD pipeline allow #malware upload and token theft. Learn about the AI/ML threat: https://lnkd.in/eKepB7mf
TensorFlow CI/CD Flaw Exposed Supply Chain to Poisoning Attacks
thehackernews.com
-
Principal Cybersecurity Cloud Solution Architect at Microsoft 🔐| Microsoft Sentinel & Defender for Cloud Blackbelt | Technology Evangelist | Security Advocate | Thought Leader | Mentor | Soccer Coach ⚽ | TechDad
🏗 Learn how to build a #CopilotforSecurity API Plugin with Microsoft's Copilot for Security (Copilot) ecosystem. 🔏 Copilot is a large language model (LLM) based generative Artificial Intelligence (AI) system for cybersecurity use cases. By using a unique plugin mechanism, Copilot can pull data from multiple sources, including third parties, making it a versatile solution for all your cybersecurity needs. Check out Part 1️⃣ of the tutorial series here: 🔗 https://lnkd.in/e8VvUzPr #Cybersecurity #ArtificialIntelligence #MicrosoftSecurity #PluginMechanism #CfS #MSFTAdvocate #RKOInsights
How to build a Copilot for Security API Plugin – Part 1
techcommunity.microsoft.com
-
CI/CD attacks are on the rise as more organizations automate their CI/CD processes and libraries like TensorFlow are used to meet the growing demand for AI applications. Check Point’s CNAPP works by integrating security into every stage of development, detecting and remediating potential threats throughout the pipeline. Shift-left security is the way to go. https://lnkd.in/gXMrikYQ
TensorFlow CI/CD Flaw Exposed Supply Chain to Poisoning Attacks
thehackernews.com
-
CISSP CISA CDPSE PCIP. Former PCI-QSA, P2PE-QSA, QPA with global experience in payments and information security
So LLMs hallucinate. Fact. People are using LLMs to create code. Fact. The code includes packages that do not exist, and does so repeatedly. So if you generate enough code using an LLM, you will find the most common non-existent packages it includes; then you go create a malicious package using the hallucinated name. There you go: loads of companies including your malicious software in their software, and they do all the work for you! Unlikely, you say? Theoretically possible but would never happen, you say? Wrong. It's happened, and Alibaba is one of the companies that were impacted. Three suggestions: 1) Create an SBOM and validate all packages, libraries, code, and APIs used in your product, so you know what you need to test and can protect yourself from supply-chain attacks. 2) Don't use LLMs to write code without checking it thoroughly (which defeats the object of using LLMs in the first place, hence (3)). 3) Don't use LLMs in any business-critical function. This will only get worse. LLMs are producing errors which are being used to train the next generation of LLMs, so the problem is self-propagating: a true self-eating worm. https://lnkd.in/e_XKwGhA
AI bots hallucinate software packages and devs download them
theregister.com
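Suggestion 1 above can be partially automated: before installing, check every declared dependency against a trusted package index so a hallucinated name fails loudly instead of resolving to an attacker's upload. A minimal Python sketch, assuming the trusted index is available as a plain set of names (in practice it would come from a curated internal mirror or the registry's index):

```python
def parse_requirements(lines):
    """Extract bare package names from requirements.txt-style lines."""
    names = []
    for line in lines:
        line = line.split("#")[0].strip()  # drop comments and blank lines
        if not line:
            continue
        # Cut the name at the first version/extras specifier
        for sep in ("==", ">=", "<=", "~=", ">", "<", "["):
            if sep in line:
                line = line.split(sep)[0]
        names.append(line.strip().lower())
    return names

def find_unknown_packages(lines, known_index):
    """Return declared dependencies absent from the trusted index --
    a cheap tripwire for hallucinated or squatted package names."""
    return [n for n in parse_requirements(lines) if n not in known_index]
```

Anything this flags deserves a human look before it ever reaches `pip install`.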
-
Discover the implications for software developed with tools like GitHub Copilot. Learn more about securing your code against AI vulnerabilities.
AI Copilot: Launching Innovation Rockets, But Beware of the Darkness Ahead
thehackernews.com
-
Data, Analytics, & AI | Leader in Optiv's Secure AI and SOC Technology Solutions | I help teams and businesses embrace innovation and manage risk
The recent PyPI breach has exposed a critical vulnerability in the open-source ecosystem that could have devastating consequences for AI projects. 😱 Imagine this: You're working on a cutting-edge AI model that could revolutionize your industry. You rely on trusted Python libraries like PyTorch and Matplotlib to build and visualize your model. But lurking in the shadows is a malicious package, cleverly disguised with a name nearly identical to the one you intended to use. 😈 One small typo during installation, and suddenly your model is compromised. The results are skewed, sensitive data is leaked, and the integrity of your entire project is called into question. 📉 But it doesn't have to be this way! 💪 In this must-read blog post, we dive deep into the techniques attackers use to infiltrate your software supply chain, like typosquatting and dependency confusion. More importantly, we arm you with the knowledge and strategies you need to defend against these threats. 🛡️ Don't let a simple mistake derail your innovative work. Take control of your open-source security today and ensure the trust and reliability of your AI/ML projects for the future. 🌟 Read on to learn how you can fortify your defenses and stay one step ahead of the attackers targeting the very tools you rely on. Your next breakthrough depends on it! 👀 #OpenSourceSecurity #SecureAI #ProtectYourCode #InnovateWithConfidence https://lnkd.in/eyNPQEA2
Securing the Software Supply Chain from Typosquatting Attacks
optiv.com
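The typosquatting pattern described above can be caught with a simple tripwire: flag any package whose name is suspiciously close to, but not exactly, a name on your trusted list. A minimal Python sketch using the standard library's `difflib`; the trusted list here is illustrative, not exhaustive:

```python
import difflib

# Illustrative allowlist -- in practice, your organization's vetted packages
TRUSTED = ["pytorch", "torch", "matplotlib", "numpy", "requests"]

def typosquat_suspects(name, trusted=TRUSTED, cutoff=0.8):
    """Return trusted names that 'name' closely resembles without
    matching exactly -- a common typosquatting tell."""
    name = name.lower()
    if name in trusted:
        return []  # exact match: this is the package you meant
    return difflib.get_close_matches(name, trusted, n=3, cutoff=cutoff)
```

A non-empty result means the install should be stopped for review; the `cutoff` threshold trades false positives against missed squats.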
-
This is an excellent blog post from Optiv Source Zero. I highly recommend reading this if you are developing AI or tasked with securing AI for your organization. This covers some of the current attack vectors as well as ways to reduce the risk.
Securing the Software Supply Chain from Typosquatting Attacks
optiv.com
-
Application Security Engineer | SDLC Pro | Facilitating Dev-Sec Collaboration with Vulnerability Expertise & Multi-Language Coding
7mo
It would be interesting to hear from people outside sales about how GitHub and Snyk do at fixing real, non-demo code.