Network & Resource Isolation
🔑 Key Takeaway: Treat network and compute as hard containment boundaries: deny egress by default, segment trust zones to prevent lateral movement, and enforce strict resource limits to block abuse and runaway jobs.
Network and compute controls are critical containment layers for sandboxed execution.
Without them, a compromised build or tool can:
- exfiltrate source code and secrets
- scan internal services and pivot laterally
- abuse infrastructure through runaway jobs
- create cost spikes via unbounded resource usage
Network isolation model
1) Default deny egress
Start with no outbound access. Add explicit allow rules only for required destinations (SCM host, package registries, artifact storage, approved APIs).
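In Kubernetes-based runner pools, one common way to express this is a namespace-wide NetworkPolicy that selects every pod and lists no egress rules. The sketch below builds such a manifest as a plain dict; the namespace name is an illustrative placeholder, not a required convention.

```python
import json

def default_deny_egress(namespace: str) -> dict:
    """Build a default-deny egress NetworkPolicy manifest for a namespace."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-egress", "namespace": namespace},
        "spec": {
            "podSelector": {},         # empty selector = all pods in the namespace
            "policyTypes": ["Egress"], # Egress listed with no rules => deny all egress
        },
    }

# "ci-untrusted" is a hypothetical namespace for untrusted jobs.
policy = default_deny_egress("ci-untrusted")
print(json.dumps(policy, indent=2))  # JSON is valid YAML; kubectl accepts it
```

Allow rules for required destinations (registries, SCM host) are then added as separate, narrowly scoped NetworkPolicy objects on top of this baseline.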
2) Separate trust zones
Use distinct network boundaries for:
- untrusted PR validation
- internal trusted builds
- release/deploy pipelines
Do not allow direct network paths from untrusted workloads to sensitive control planes.
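One way to keep zone-to-zone reachability auditable is to maintain it as an explicit default-deny matrix that firewall or policy tooling is generated from. The zone names below are hypothetical examples, not a standard taxonomy.

```python
# Explicit reachability matrix: a path exists only if listed here.
# Zone and destination names are illustrative placeholders.
ALLOWED_PATHS = {
    ("untrusted-pr", "package-proxy"),    # only via the egress gateway
    ("internal-ci", "package-proxy"),
    ("internal-ci", "artifact-store"),
    ("release", "deploy-targets"),
}

def path_allowed(src_zone: str, dst_zone: str) -> bool:
    """Default deny: unlisted (source, destination) pairs are refused."""
    return (src_zone, dst_zone) in ALLOWED_PATHS

# Untrusted PR validation must never reach the deployment control plane.
print(path_allowed("untrusted-pr", "deploy-targets"))  # False
print(path_allowed("release", "deploy-targets"))       # True
```

Because the matrix is data, a reviewer can diff any change to it, and a test can assert that no untrusted zone ever gains a path to a sensitive one.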
3) Constrain DNS and metadata access
- Restrict DNS resolvers and block arbitrary external name resolution where feasible.
- Deny access to cloud metadata endpoints from untrusted jobs unless explicitly required.
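Cloud metadata services sit on link-local addresses (most commonly 169.254.169.254), so a conservative default for untrusted jobs is to block the entire link-local range at the egress filter. A minimal sketch of that check using the standard library:

```python
import ipaddress

# Link-local ranges where cloud metadata endpoints live. Blocking the
# whole range, rather than one address, is the conservative default.
BLOCKED_NETWORKS = [
    ipaddress.ip_network("169.254.0.0/16"),  # IPv4 link-local (incl. 169.254.169.254)
    ipaddress.ip_network("fe80::/10"),       # IPv6 link-local
]

def egress_ip_blocked(ip: str) -> bool:
    """Return True if the destination IP falls in a blocked range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

print(egress_ip_blocked("169.254.169.254"))  # True: metadata endpoint
print(egress_ip_blocked("151.101.0.223"))    # False: ordinary public IP
```

In practice this check belongs in the network layer (firewall rules, NetworkPolicy, or proxy), with DNS resolution pinned so a job cannot bypass it via a hostname that resolves to a link-local address.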
4) Centralize outbound controls
For mature environments, route egress through policy-enforcing gateways/proxies to monitor and block prohibited destinations.
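The decision logic such a gateway applies can be as simple as a per-stage hostname allowlist. The sketch below shows the shape of that check; the stage names and hostnames are examples, not a standard.

```python
from urllib.parse import urlsplit

# Hypothetical per-stage destination allowlists, as an egress proxy
# might consult them. Hostnames are illustrative examples.
STAGE_ALLOWLISTS = {
    "pr-validation": {"proxy.internal.example.com"},
    "internal-build": {"registry.npmjs.org", "pypi.org", "files.pythonhosted.org"},
}

def egress_allowed(stage: str, url: str) -> bool:
    """Deny unless the exact hostname is on the stage's allowlist."""
    host = urlsplit(url).hostname or ""
    return host in STAGE_ALLOWLISTS.get(stage, set())

print(egress_allowed("internal-build", "https://pypi.org/simple/requests/"))  # True
print(egress_allowed("pr-validation", "https://attacker.example.net/exfil"))  # False
```

Centralizing the check also yields a single audit point: every denied destination can be logged with the stage, repository, and job that attempted it.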
Resource isolation model
Enforce limits at both the runner and workload levels:
- CPU and memory limits per job/container
- process count (pids) limits
- filesystem and artifact size limits
- execution timeout limits
- concurrency limits per workflow/repository
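The limits above are usually enforced by the container runtime or cgroups, but the same idea can be sketched at the process level with Unix-only standard-library calls: a memory cap applied inside the child, plus a wall-clock timeout enforced by the parent. Limit values here are illustrative.

```python
import resource
import subprocess
import sys

def run_limited(cmd, mem_bytes=512 * 2**20, timeout_s=30):
    """Run a command with an address-space cap and a wall-clock timeout."""
    def apply_limits():
        # Cap virtual address space in the child; oversized allocations fail fast.
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    return subprocess.run(cmd, preexec_fn=apply_limits, timeout=timeout_s)

# Runaway-time job: killed by the parent when the timeout expires.
timed_out = False
try:
    run_limited(["sleep", "60"], timeout_s=1)
except subprocess.TimeoutExpired:
    timed_out = True
print("timed out:", timed_out)

# Runaway-memory job: a ~2 GiB allocation fails under the 512 MiB cap.
result = run_limited([sys.executable, "-c", "bytearray(2 * 10**9)"])
print("exit code:", result.returncode)
```

Production runners should apply the same constraints through cgroup or container-runtime limits so they cover the entire job tree, not just a single child process.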
These limits serve double duty as security and reliability controls: they contain deliberate abuse and keep accidental runaway jobs from degrading shared infrastructure.
CI/CD implementation checklist
- Untrusted jobs run in isolated runner pool
- Default-deny egress applied to untrusted zone
- Destination allowlists documented per pipeline stage
- Metadata service access restricted
- CPU/memory/pids limits configured
- Job and step timeouts enforced
- Max artifact/log size capped
Example control matrix
| Workload type | Egress policy | Resource policy |
|---|---|---|
| Fork PR validation | deny-by-default, minimal allowlist | strict CPU/memory/time limits |
| Internal CI build | allow required registries/APIs only | moderate limits, queue controls |
| Release/deploy | highly specific allowlist to deployment targets | conservative limits + approval gates |
Common anti-patterns
- “Allow all egress” for convenience in CI.
- Reusing a single runner network profile for all trust levels.
- Missing job timeouts (infinite loops become outages).
- Unlimited artifact uploads from untrusted workflows.
References
- NIST SP 800-190, Application Container Security Guide: https://csrc.nist.gov/pubs/sp/800/190/final
- NIST SP 800-53 Rev. 5 (network and resource control families): https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final
- CISA, Defending Continuous Integration/Continuous Delivery (CI/CD) Environments: https://www.cisa.gov/resources-tools/resources/defending-continuous-integrationcontinuous-delivery-cicd-environments
- Kubernetes, Network Policies: https://kubernetes.io/docs/concepts/services-networking/network-policies/
- Kubernetes, Resource Management for Pods and Containers: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
- Docker, Docker Engine Security: https://docs.docker.com/engine/security/
- gVisor documentation: https://gvisor.dev/docs/
- Firecracker documentation: https://github.com/firecracker-microvm/firecracker/tree/main/docs