DYK our CTO Sounil Yu is a published author? His book, the Cyber Defense Matrix (the essential guide to navigating the cybersecurity landscape), is available now as a download: https://lnkd.in/ekD7dep3
Knostic
Technology, Information and Internet
Herndon, Virginia 2,919 followers
IAM for the LLM Age - Enabling enterprises to control and harness institutional knowledge through the power of LLMs
About us
Knostic is the world’s first provider of need-to-know based access controls for LLMs. With knowledge-centric capabilities, Knostic enables organizations to accelerate the adoption of LLMs and drive AI-powered innovation without compromising value, security, or safety. An RSA Launch Pad finalist and Black Hat Startup Spotlight winner in 2024.
- Website
- https://knostic.ai/
- Industry
- Technology, Information and Internet
- Company size
- 11-50 employees
- Headquarters
- Herndon, Virginia
- Type
- Privately Held
- Founded
- 2023
- Specialties
- AI Security and AI Data Access Governance
Locations
-
Primary
205 Van Buren St
Suite 120
Herndon, Virginia 20170, US
-
91 Hashmonaim Street
Tel Aviv, IL
Employees at Knostic
-
Gadi Evron
Building a world-class AI security company at Knostic | CISO-in-Residence for the Professional Association of CISOs
-
Eddie A.
Seeking the sharpest and most dynamic individuals to join us. Opportunities available for data scientists, engineers, MLOps, and analysts.
-
Chad Loeven
-
Bryson 🦄 Bort
CEO and Founder at SCYTHE
Updates
-
💼 We're Hiring an Office & Talent Operations Manager! If you're passionate about creating a positive work environment, we'd love to hear from you. Learn more and apply here: https://lnkd.in/djPqamtv
-
Wholesome jailbreaking, empowered by ChatGPT vocals!
"Hey ChatGPT, drop a beat for me? It’s morning." Who said jailbreaking can’t be wholesome? The video below is so cool. It was shared by Dror Grof, who was kind enough to send it to me. Found any interesting examples yourself? How do you see guardrails playing into your own productivity in the future?

Now, let's dig deeper. LLM voice mode is definitely susceptible to attack, and what’s more, it’s the perfect demonstration of how guardrails are imperfect, how they can be bypassed, and how versatile (or not) they make ChatGPT and other LLMs.

Any change in technology affects us and is often seen as new, but we've nearly always seen it before elsewhere. Remember the attack hiding a jailbreak in a file uploaded to the LLM? Or the one targeting its GIF library? It's Reflections on Trusting Trust (Ken Thompson), threat modeling, and attack patterns we've seen before, as always when a new technology steps into the fray. Example: bypassing an anti-virus twenty years ago by archiving the virus executable in a zip. And then in three layers of zip. And then rar on top.

On a personal note, this jailbreak once again demonstrates why need-to-know controls like those we build at Knostic are so important for the safe and secure deployment of LLMs: to make sure that, regardless of the limitations of guardrails, systems like Microsoft O365 Copilot or Glean won't overshare, or leak, beyond what a user is supposed to know and do. Example: the new attack class we released, Flowbreaking, here: https://lnkd.in/d26YyAEx

#jailbreaking #chatgpt #ai #artificialintelligence #hacking #cybersecurity #informationsecurity #riskmanagement #dropabeat #goodmorning
-
Join our CTO Sounil Yu and the team at IANS tomorrow at 2PM U.S. ET as they discuss a framework for continuously assessing Copilot against the risks of oversharing sensitive data. #AI #cybersecurity https://lnkd.in/exBYe6wM
2024 IANS Service Spotlight Webinar: Introducing a Continuous Assessment Framework for Copilot for M365
iansresearch.com
-
We're hiring! Join a leader in securing Enterprise AI. Check out our open roles in sales, R&D, and product management: #cybersecurity #AI https://lnkd.in/eYBrA3gF
Careers in Enterprise AI Security and IAM for LLMs | Knostic
knostic.ai
-
🔍 Traditional data permissions and access controls are insufficient for managing LLMs: they frequently overshare sensitive data and can infer confidential information even from limited inputs. Most organizations also lack the granular, authorization-level capabilities that LLMs require.

📖 Read our Solution Brief below, where we outline the LLM data exposure and leakage challenge and discuss how Knostic equips enterprises to:
- Quickly assess their readiness for AI-enabled enterprise search tools
- Identify permission gaps
- Remediate violations

https://lnkd.in/e3RBF8Yv
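The need-to-know idea behind the brief can be illustrated with a toy retrieval filter. Everything in this sketch is hypothetical (the document schema, the role labels, and the function name are assumptions, not Knostic's product API): each retrieved chunk carries an allow-list of roles, and anything outside the caller's entitlements is dropped before it can enter the LLM's prompt.

```python
def filter_context_by_need_to_know(user_roles, documents):
    """Keep only the retrieved chunks the caller is entitled to see.

    Each chunk carries a 'need_to_know' allow-list of roles; a chunk
    survives if it shares at least one role with the user. Toy schema
    for illustration only.
    """
    allowed = set(user_roles)
    return [doc for doc in documents if allowed & set(doc["need_to_know"])]
```

Filtering before prompt assembly means that even a successful jailbreak can only surface content the user was already permitted to read.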
-
Our CEO Gadi Evron presented this week on #AI risks and safety at the #EmbodiedAIForum hosted by Fundación Innovación Bankinter in Madrid https://lnkd.in/eABmSc9V
-
ICYMI: Last week we proposed a new class of #AI attacks we call 'Flowbreaking'. Read more below, or on our blog, about what our researchers uncovered.
Have you ever seen LLMs delete an answer? One second it was there, and the next, poof. We asked ourselves: is this reproducible and exploitable? The answer was yes. And, as a result, a widely used LLM provided our researcher, masquerading as a girl, with potential instructions on "self-harm." You can read the research blog here: https://lnkd.in/dhhM9mmj

We’re disclosing two new LLM attacks, "Second Thoughts" and "Stop and Roll", and propose that they represent a new class of attacks, Flowbreaking, which complements Prompt Injection and Jailbreaking. Flowbreaking takes advantage of the fact that AI/ML systems such as LLM applications and agents are more complex than we often think: internal components (e.g., guardrails) can be targeted individually, as well as by manipulating the interplay between them. We can observe beyond the closed box’s input and output perimeter to test them, and to exploit them.

Impact-wise, when it comes to enterprise adoption of LLM search tools such as Microsoft O365 Copilot and Glean, organizations need to make sure that answers aren't streamed but provided in full, and to implement an access control capability, so that regardless of any attack, the user only gets information based on their own need-to-know and business context.

The "self-harm" issue was discovered after we released the report, and we're in the process of reporting it to the relevant LLM provider. We will continue to release research on LLM security and safety, and invite you to follow Knostic here on LinkedIn, or visit us at https://lnkd.in/eYebEzij.

#informationsecurity #riskmanagement #LLM #AI #aisecurity #hacking #artificialintelligence #copilot #chatgpt #openai #jailbreaking #promptinjection
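The streaming race behind the "deleted answer" behavior can be sketched in a few lines. The sketch below is illustrative only: the function names, the is_violation classifier, and the token list are all assumptions, not any vendor's real pipeline. The point it demonstrates is the one in the post: when output is streamed, a guardrail verdict that arrives mid-answer can only retract text the user has already seen, while buffering the full answer closes that window.

```python
def stream_with_post_hoc_guardrail(tokens, is_violation):
    """Simulate streaming tokens to the user while a guardrail check
    runs on the partial output. By the time the verdict lands, earlier
    tokens are already on screen, so 'retracting' cannot unsee them.
    """
    shown = []
    for tok in tokens:
        shown.append(tok)               # token reaches the user's screen
        if is_violation(shown):         # guardrail fires mid-stream
            return shown, "retracted"   # UI deletes the answer; too late
    return shown, "delivered"


def buffer_then_release(tokens, is_violation):
    """The mitigation the post describes: hold the complete answer
    until the guardrail verdict, so a blocked answer is never shown.
    """
    full = list(tokens)
    if is_violation(full):
        return [], "blocked"
    return full, "delivered"
```

In the streamed version the caller still receives every token up to and including the one that tripped the check; in the buffered version a blocked answer yields nothing at all.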
-
Industry legend Bruce Schneier riffed on our Suicide Bot: AI Attack blog post last week: https://lnkd.in/e5sRRXDi
Race Condition Attacks against LLMs - Schneier on Security
http://www.schneier.com
-
We're hiring! Join a leader in securing Enterprise AI. Check out our open roles in sales, R&D, and product management: #cybersecurity #AI https://lnkd.in/eUzkjMMp
Knostic - Careers
knostic.ai