OWASP Top 10:2021 — A08: Software and Data Integrity Failures
Welcome back to this OWASP Top 10:2021 security series. Today, we’re stepping away from code bugs and logic flaws to talk about something much sneakier: trust — specifically, the trust we place in the build, deployment, and dependency systems that make modern software possible.
When that trust is broken, you can do everything else right and still end up hacked.
Full series:
- A01: Broken Access Control
- A02: Cryptographic Failures
- A03: Injection Attacks
- A04: Insecure Design
- A05: Security Misconfiguration
- A06: Vulnerable and Outdated Components
- A07: Identification and Authentication Failures
- A08: Software and Data Integrity Failures (you are here)
- A09: Security Logging and Monitoring Failures
- A10: Server-Side Request Forgery (SSRF)
What Does This Category Cover?
This category focuses on failures to verify the integrity of software, data, and systems.
Key areas include:
- CI/CD pipeline vulnerabilities
- Insecure software updates
- Unsigned or unverified packages
- Insecure deserialisation or object injection
- Dependency poisoning (e.g., typosquatting)
Think of it as attacking the supply chain — from code to production.
Common Vulnerabilities
Insecure CI/CD Pipelines
Attackers target build environments, inject malicious code or secrets, or manipulate test outputs to get backdoors into production.
Unsigned or Tampered Software Packages
Many projects download dependencies over HTTP or run install scripts via `curl | bash` style commands, both of which can be intercepted or manipulated in transit.
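A minimal sketch of the safer habit: download the script, check its SHA-256 against a checksum published out-of-band, and only then execute it. The URL and expected hash below are placeholders, not real values.

```python
import hashlib
import subprocess
import urllib.request

# Placeholder values -- in practice the checksum comes from the vendor's
# release page, a signed checksum file, or another out-of-band channel.
INSTALLER_URL = "https://example.com/install.sh"
EXPECTED_SHA256 = "aabbcc..."  # published checksum (placeholder)

def fetch_and_verify(url: str, expected_sha256: str) -> bytes:
    """Download a script and refuse to return it unless its hash matches."""
    data = urllib.request.urlopen(url).read()
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        raise RuntimeError(f"Checksum mismatch: expected {expected_sha256}, got {actual}")
    return data

script = fetch_and_verify(INSTALLER_URL, EXPECTED_SHA256)
subprocess.run(["bash"], input=script, check=True)  # only runs verified bytes
```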
Unverified Software Updates
Auto-update systems that don’t verify digital signatures (or don’t use them at all) can be hijacked to distribute malware.
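For illustration, here is a hedged sketch of what client-side verification can look like, assuming the vendor signs updates with an Ed25519 key and ships the public key inside the application (using the pyca/cryptography library). The file names are illustrative.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# The public key is baked into the client at build time; the matching
# private key never leaves the vendor's signing infrastructure.
with open("update_signing_key.pub", "rb") as f:
    public_key = Ed25519PublicKey.from_public_bytes(f.read())

with open("update-1.2.3.bin", "rb") as f:
    update_payload = f.read()

with open("update-1.2.3.bin.sig", "rb") as f:
    signature = f.read()

try:
    # Raises InvalidSignature if the payload or signature was tampered with.
    public_key.verify(signature, update_payload)
except InvalidSignature:
    raise SystemExit("Update signature invalid -- refusing to install")

print("Signature OK -- proceeding with install")
```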
Deserialization of Untrusted Data
Apps that blindly deserialise input (Java, PHP, Python pickle, etc.) open the door to remote code execution if the serialised data is attacker-controlled.
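A short illustration of the difference: pickle will happily execute attacker-defined code while loading a payload, whereas a plain data format like JSON only ever produces data. Assume `untrusted_bytes` arrived over the network.

```python
import json
import pickle  # imported only to illustrate the unsafe path below

untrusted_bytes = b'{"user": "alice", "role": "admin"}'  # came from the network

# UNSAFE: pickle.loads() can run arbitrary code embedded in the payload
# (a crafted __reduce__ method executes during deserialisation).
# obj = pickle.loads(untrusted_bytes)   # never do this with untrusted input

# SAFER: JSON parsing can only produce dicts, lists, strings, and numbers --
# no code runs as a side effect of parsing.
data = json.loads(untrusted_bytes)

# Then validate the structure explicitly before trusting it.
if not isinstance(data, dict) or not isinstance(data.get("user"), str):
    raise ValueError("Unexpected payload shape")
```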
How Attackers Exploit This
Dependency Confusion
A famous attack class demonstrated in 2021 exploited the fact that internal packages (like @company/internal-lib) weren’t published to public registries. An attacker uploaded packages with the same names (and higher version numbers) to npm/PyPI, and because many package managers prefer the public or higher-versioned copy, CI pipelines pulled in the attacker’s package by default.
Result? Internal systems executed the attacker’s code.
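One defensive habit that catches this early: periodically check whether your internal package names have appeared on the public registry, which is a strong sign someone is squatting on them. A hedged sketch using PyPI’s JSON API follows; the package names are made up.

```python
import urllib.error
import urllib.request

# Hypothetical internal-only package names -- replace with your own list.
INTERNAL_PACKAGES = ["acme-billing-core", "acme-auth-client"]

def exists_on_pypi(name: str) -> bool:
    """Return True if a package with this name is published on public PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # name is not taken on the public index
        raise

for pkg in INTERNAL_PACKAGES:
    if exists_on_pypi(pkg):
        print(f"WARNING: '{pkg}' exists on public PyPI -- possible dependency confusion")
```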
Compromising Build Systems
Nation-state actors have targeted build infrastructure and CI servers like Jenkins and TeamCity, compromising build artifacts at the source. Victims include SolarWinds, Codecov, and others.
Malicious Update Injection
Fake updates from compromised servers (or DNS hijacks) deliver malware to thousands of endpoints. If there’s no signature verification, there’s no defense.
How Engineers Can Defend Against This
This isn’t just about writing better code; it’s about hardening how you build and securing your entire toolchain.
- Use Signed Packages and Artifacts
  - Prefer registries that support signature verification (e.g., npm with Sigstore, Docker Content Trust)
  - Verify checksums or hashes during installs
  - Use signing tools like cosign (part of Sigstore) to sign images and binaries
- Lock Down CI/CD Pipelines
  - Restrict access to pipeline secrets and configs
  - Use ephemeral build runners where possible
  - Validate all inputs, outputs, and deployments
  - Isolate environments (don’t build and deploy from the same container)
- Implement Integrity Checks
  - Use Subresource Integrity (SRI) for JS/CDN assets (see the sketch after this list)
  - Run hash or signature checks on downloaded tools and updates
  - Require signed commits for critical repos
- Defend Against Deserialisation Flaws
  - Avoid deserialisation unless absolutely necessary
  - Never deserialise untrusted input
  - Use safe formats (JSON, Protobuf) instead of raw binary objects
  - Apply strict class whitelisting
- Monitor for Dependency Poisoning
  - Audit all dependencies — even dev/test ones
  - Pin exact versions
  - Host critical libraries internally if needed
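As referenced in the integrity-checks item above, an SRI value is nothing exotic: it is the base64-encoded SHA-384 digest of the asset, prefixed with the algorithm name. A small sketch; the asset path is a placeholder.

```python
import base64
import hashlib

def sri_hash(path: str) -> str:
    """Compute a Subresource Integrity value (sha384-<base64 digest>) for a file."""
    with open(path, "rb") as f:
        digest = hashlib.sha384(f.read()).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Placeholder asset path -- in practice a build step emits this value into
# the <script integrity="..."> attribute of the page that loads the asset.
print(sri_hash("dist/app.min.js"))
```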
Real-World Case: SolarWinds Orion
Attackers compromised the SolarWinds build process to inject a malicious DLL into the company’s Orion network monitoring software.
The software was signed, packaged, and distributed as normal — and used by thousands of companies and government agencies.
It wasn’t a vulnerability in the software. It was a vulnerability in the process.
Final Thoughts
Software supply chain attacks are scary because they don’t look like traditional vulnerabilities. They don’t exploit your business logic — they exploit the trust you place in your tools and processes.
The only solution? Assume nothing is trustworthy by default — and prove the integrity of every step from code to production.
Next up: A09: Security Logging and Monitoring Failures → We’ll talk about the silent failures that keep you in the dark when bad things happen — and how to fix them before it’s too late.