Zero Trust Software Architecture

No doubt you’ve heard of zero trust, information security’s favorite buzzword! Technology security vendors label everything as a zero trust solution these days. And yet, since their implementations are so vastly different from one another, it’s difficult to grasp what zero trust stands for. 

The confusion arises because, in short, zero trust is simply an idea.

NIST SP 800-207, published in 2020, lays out the components of a zero trust architecture, modeled after John Kindervag’s original think tank–generated idea from 2008 of “never trust; always verify,” achieved by focusing on reducing and protecting your attack surfaces:

  • Shrink implicit trust zones as much as possible by drawing security boundaries.

  • Deploy reasonable security controls on all protect surfaces.

  • Controls should include constant scrutiny of anything crossing a security boundary, verifying both its authenticity and its nonanomalous intent (a minimal sketch of such a boundary check follows this list).
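
To make that last principle concrete, here is a minimal sketch (in Python) of a deny-by-default check applied to every request that crosses a security boundary. The role names, resources, and allow rules are illustrative assumptions, not part of NIST SP 800-207:

    # Hypothetical deny-by-default check run on every boundary crossing;
    # nothing is trusted by virtue of where the request came from.
    import logging

    logging.basicConfig(level=logging.INFO)

    # Illustrative allow rules: (role, resource, action) tuples that are permitted.
    ALLOW_RULES = {
        ("billing-service", "invoices", "read"),
        ("billing-service", "invoices", "write"),
        ("reporting-job", "invoices", "read"),
    }

    def authorize(identity: str, role: str, resource: str, action: str) -> bool:
        """Re-evaluate trust on every call; deny unless explicitly allowed."""
        allowed = (role, resource, action) in ALLOW_RULES
        logging.info("boundary check: id=%s role=%s %s:%s -> %s", identity, role,
                     resource, action, "ALLOW" if allowed else "DENY")
        return allowed

    print(authorize("svc-42", "reporting-job", "invoices", "read"))   # True
    print(authorize("svc-42", "reporting-job", "invoices", "write"))  # False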

The zero trust concept applies to all technology architecture, including applications and software. Software design can achieve improved security by developing application environments with a zero trust mindset. In software architecture, this means intentionally creating functional security boundaries, revalidating any person or process that attempts to cross them, and scrutinizing application data for indications of compromise or incompatibility. Containers (e.g., Docker) make this easier by bundling together an application and all its dependencies (i.e., reducing the application’s attack surface), and yet there are still additional security boundaries that should be established to achieve best-practice container security.

Anything traveling between security boundaries should be abstracted and/or tokenized to maintain integrity, nonrepudiation, and, where required, confidentiality. This includes having a mechanism that refreshes that trust token regularly, so it isn’t simply “good” ad infinitum for conveying trustability; a sketch of this idea follows.
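
As an illustration of the short-lived trust token idea, here is a minimal sketch using only the Python standard library. The field names and the five-minute lifetime are assumptions made for the example; a production system would more likely use an established standard such as signed, rotating JWTs.

    # Sketch: a signed, short-lived trust token that must be refreshed regularly.
    import base64, hashlib, hmac, json, time

    SECRET = b"replace-with-a-managed-secret"  # assumption: retrieved from a secret store
    TOKEN_LIFETIME_SECONDS = 300               # assumption: five-minute validity window

    def issue_token(subject: str) -> str:
        payload = json.dumps({"sub": subject, "exp": time.time() + TOKEN_LIFETIME_SECONDS})
        sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return base64.urlsafe_b64encode(payload.encode()).decode() + "." + sig

    def verify_token(token: str) -> dict:
        """Check integrity and expiry on every use; never accept a stale token."""
        encoded_payload, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(encoded_payload).decode()
        expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            raise ValueError("token integrity check failed")
        claims = json.loads(payload)
        if time.time() > claims["exp"]:
            raise ValueError("token expired; caller must re-authenticate to refresh it")
        return claims

    token = issue_token("payments-service")
    print(verify_token(token))  # valid only until the short expiry forces a refresh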

How can you identify where security boundaries and controls need to be added?

Sound application security programs include threat modeling: analyzing an application’s architecture, design, and functionality to identify potential vulnerabilities that could affect its security. Start with known application threats.

The Open Web Application Security Project (OWASP) “Top 10 Web Application Security Risks” list establishes a decent foundation for a zero trust web app architecture:

  • Verify identity through cryptographic mechanisms such as identity certificates. Enforce role-based access control (RBAC) for any access beyond a security boundary, using meaningful user accounts (i.e., don’t run things as root, limit elevated capabilities to authorized administrative accounts, and implement two-factor authentication [2FA] wherever feasible).

  • Never trust data. Data validation mitigates injection flaws, cross-site scripting (XSS), and XML vulnerabilities (see the validation sketch after this list).

  • Proactively test for security misconfigurations, and verify that security logging and monitoring are in place to catch broken authentication, compromised credentials, and unintentional exposure of sensitive data.
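
As flagged in the list above, here is a minimal sketch of the “never trust data” principle: validating an untrusted field against an allowlist pattern and escaping it on output. The field name and pattern are assumptions for the example; real applications would typically also rely on a vetted validation library and parameterized queries.

    # Sketch: allowlist validation plus output encoding for untrusted input.
    import html
    import re

    # Assumption for the example: usernames are 3-32 letters, digits, dots, dashes, or underscores.
    USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9._-]{3,32}$")

    def validate_username(raw: str) -> str:
        """Reject anything outside the allowlist; never silently 'fix' bad input."""
        if not USERNAME_PATTERN.fullmatch(raw):
            raise ValueError("invalid username supplied across a security boundary")
        return raw

    def render_greeting(raw: str) -> str:
        """Escape on output so displayed data cannot become stored XSS."""
        return "<p>Hello, " + html.escape(validate_username(raw)) + "</p>"

    print(render_greeting("jackie_b"))              # accepted
    # render_greeting("<script>alert(1)</script>")  # rejected with ValueError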

Finally, if your application handles sensitive data, reducing the attack surfaces and focusing security controls on the protect surfaces must be a priority.

For example, the STRIDE AppSec threat modeling method (where you consider each of the categories of spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege) could also be adopted with a zero trust mindset. Use these threat categories to scrutinize your design and systematically identify vulnerabilities related to confidentiality, data integrity, availability, nonrepudiation, and IAM, and then use that information to determine the additional security boundaries and controls your application needs to address them. And of course, test everything to verify.
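
As a rough illustration, the sketch below pairs each STRIDE category with the property it threatens and the kind of boundary control a zero trust review might add; the example controls are chosen for illustration, not drawn from a standard mapping.

    # Illustrative STRIDE-to-control mapping used to drive a zero trust design review.
    STRIDE = {
        "Spoofing":               ("authenticity",    "strong identity verification, e.g., mutual TLS and 2FA"),
        "Tampering":              ("integrity",       "signed or tokenized data at every boundary crossing"),
        "Repudiation":            ("nonrepudiation",  "tamper-evident audit logging of boundary decisions"),
        "Information disclosure": ("confidentiality", "encryption in transit and at rest, least-privilege access"),
        "Denial of service":      ("availability",    "rate limiting and resource quotas at protect surfaces"),
        "Elevation of privilege": ("authorization",   "deny-by-default RBAC with minimal capabilities"),
    }

    for threat, (property_at_risk, example_control) in STRIDE.items():
        print(f"{threat}: protect {property_at_risk} -> {example_control}")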

AI is becoming increasingly useful for identifying the anomalous protocols and access behaviors that zero trust scrutiny in application security depends on. However, it’s essential to remain mindful of a potential issue with AI: the integrity of its training data. If the data used to train AI systems is flawed (such as in cases of AI data set poisoning), it can compromise the accuracy of any AI-driven analysis. When integrating AI into software applications, it’s imperative to incorporate robust security metrics and a stringent validation process, and defining what constitutes a successful metric is vital. Adhering to the adage of “never trust; always verify” remains key in this context.
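
As a final sketch of “never trust; always verify” applied to AI components, the hypothetical checks below verify a training data set against a known-good hash and refuse to promote a model whose evaluation metric misses a predefined threshold. The file name, expected hash, and threshold are assumptions for illustration.

    # Sketch: verify training data integrity and a predefined success metric before trusting a model.
    import hashlib
    from pathlib import Path

    EXPECTED_SHA256 = "0" * 64      # assumption: published by the data owner out of band
    MIN_ACCEPTABLE_ACCURACY = 0.95  # assumption: the "successful metric" defined up front

    def dataset_is_intact(path: str) -> bool:
        """Recompute the data set hash and compare it to the known-good value."""
        return hashlib.sha256(Path(path).read_bytes()).hexdigest() == EXPECTED_SHA256

    def model_is_acceptable(measured_accuracy: float) -> bool:
        """Only promote models that meet the metric agreed on before training."""
        return measured_accuracy >= MIN_ACCEPTABLE_ACCURACY

    # Example gate before deployment (path and numbers are placeholders):
    # if not dataset_is_intact("training_data.csv") or not model_is_acceptable(0.97):
    #     raise RuntimeError("do not deploy: verification failed")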
