Published on: March 4, 2025
4 min read

The GitLab AI Security Framework for security leaders

Discover how GitLab Duo's security controls, third-party integrations, and retention policies help teams safely implement AI into their development workflow.


As companies rapidly adopt AI technologies, CISOs face a new frontier of security challenges. Many security leaders find themselves grappling with unfamiliar questions: How do we evaluate AI vendors differently from traditional software vendors? What security controls matter most? Where does vendor responsibility end and customer responsibility begin? How do we evaluate AI security risks in the context of the service being provided? To help answer these questions, we’ve created the GitLab AI Security Framework, which shows security leaders how GitLab and its customers work together to enable secure, AI-powered development with GitLab Duo.

The genesis of AI security challenges

From conversations with security leaders across industries, a pattern has emerged: Organizations are rapidly embracing AI technologies to improve delivery while their security teams struggle to establish appropriate security controls.

This disconnect isn't just a matter of resources or expertise; it represents a fundamental shift in how organizations need to approach security in the AI era. Security leaders are witnessing rapid, unprecedented adoption of AI across their organizations, from development teams using coding assistants to marketing departments leveraging generative AI.

While organizations are integrating AI within their own software, many of their current vendor-provided SaaS applications have added AI capabilities as well. Although this adoption drives innovation and efficiency, it also creates a complex set of security considerations that traditional frameworks weren't designed to address. Below are some of the specific challenges we’ve identified.

Security challenges in the AI era

1. Responsibility and control uncertainty

The rapid pace of AI adoption has left many organizations without a coherent security governance strategy. Security teams find themselves retrofitting existing frameworks to address AI-specific concerns, and security leaders struggle to understand where their responsibilities begin and end when it comes to AI security. The traditional vendor-customer relationship becomes more complex with AI systems, as data flows, model training, and inference processes create new types of interactions and dependencies.

2. Risk assessment evolution

Traditional security risk models struggle to capture the unique characteristics of AI systems. Security leaders are finding that standard risk assessment frameworks don't adequately address AI-specific risks, which differ based on how AI is implemented and the context in which it is used. The challenge is compounded by the need to evaluate AI vendors without necessarily having deep technical AI expertise on the security team.

3. Data protection complexities

AI systems present unique challenges for data protection. The way these systems process, learn from, and generate data creates new privacy and security considerations that organizations should carefully evaluate. CISOs must ensure their data governance frameworks evolve to address how AI systems use and protect sensitive information. AI implementations with inadequate safeguards might inadvertently reveal protected information via AI-generated outputs.

4. Compliance and standards navigation

The regulatory landscape for AI security is rapidly evolving, with new standards like ISO/IEC 42001 emerging alongside existing frameworks. Security leaders must navigate this complex environment while ensuring their AI implementations remain compliant with both current and anticipated regulations. This requires a delicate balance between enabling AI adoption and maintaining robust security controls that satisfy regulatory requirements.

Addressing these challenges

With the release of GitLab Duo, we recognized these executive-level concerns and developed a comprehensive framework to help organizations navigate AI security in the context of our AI-powered DevSecOps platform. Our AI Security Framework details our privacy-first approach to implementing AI in GitLab Duo and explains how we validate the security of our AI vendors. It includes a responsibility matrix to help security leaders manage their AI security responsibilities while enabling their organizations to innovate safely. We also compiled a selection of AI-specific security risks to keep in mind and highlighted how GitLab capabilities like prompt guardrails can help mitigate them; a simplified illustration of that idea follows.
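To make "prompt guardrails" concrete, here is a minimal, hypothetical sketch of the general technique: scanning a prompt for likely secrets and redacting them before anything reaches a model. The patterns and the `apply_guardrails` function below are illustrative assumptions for this post, not GitLab Duo's actual implementation.

```python
import re

# Illustrative patterns only -- a production guardrail would use a far
# broader, regularly updated detection set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
}

def apply_guardrails(prompt: str) -> str:
    """Redact likely secrets from a prompt before it is sent to a model."""
    for name, pattern in SECRET_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt

# Example: the key never leaves the client in plaintext.
print(apply_guardrails("Debug this config: api_key = AKIAIOSFODNN7EXAMPLE"))
```

The point of the sketch is the placement, not the patterns: controls like this sit between the user and the model, so sensitive data is filtered before it can become part of a prompt, a training corpus, or a logged request.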

Want a deeper look at our security controls? Check out our AI Security Framework.

