
What Is SAST? A Practitioner's Guide to Static Application Security Testing

Static Application Security Testing (SAST) analyzes your source code for security flaws before deployment. Here's how it actually works, when to use it, and what to watch out for.

Offensive360 Security Research Team | Application Security
Tags: SAST, static analysis, application security, DevSecOps, code review

The short version

Static Application Security Testing — SAST — scans your source code, bytecode, or binary for security vulnerabilities without running the application. Think of it like a spell checker, but instead of catching typos, it catches SQL injection, hardcoded credentials, and buffer overflows.

That’s the textbook answer, anyway. The reality is a bit more nuanced.

How SAST actually works under the hood

Most SAST tools build an abstract syntax tree (AST) of your code, then run a set of rules and data-flow analyses against that tree. The good ones go further — they track how data moves through your application from sources (user input, HTTP parameters, file reads) to sinks (database queries, file system operations, command execution).

This is called taint analysis, and it’s what separates a decent SAST tool from a glorified regex scanner. When a SAST tool tells you “user input from line 14 reaches an unsanitized SQL query on line 87,” that’s taint analysis doing its job.

Here’s a concrete example. Take this Python Flask endpoint:

@app.route('/search')
def search():
    query = request.args.get('q')  # SOURCE: user input
    # ... 30 lines of business logic ...
    cursor.execute(f"SELECT * FROM products WHERE name LIKE '%{query}%'")  # SINK: SQL query
    return render_template('results.html', products=cursor.fetchall())

A good SAST tool traces query from request.args.get('q') through whatever transformations happen in those 30 lines of business logic, all the way to the cursor.execute() call. If nothing sanitizes or parameterizes that input along the way, you get a finding: SQL Injection, CWE-89.

The fixed version is straightforward:

@app.route('/search')
def search():
    query = request.args.get('q')
    # Parameterized query — the database driver handles escaping
    cursor.execute("SELECT * FROM products WHERE name LIKE %s", (f'%{query}%',))
    return render_template('results.html', products=cursor.fetchall())
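To make the taint-analysis idea concrete, here is a toy sketch built on Python's standard `ast` module. The function name `find_sql_taint` and its single source rule (`request.args.get`) and sink rule (`cursor.execute` with an f-string) are invented for illustration; real engines track many sources and sinks, follow sanitizers, and work across functions and files.

```python
import ast

# Toy intra-procedural taint tracker: variables assigned from
# request.args.get(...) are marked tainted (sources); an f-string passed
# to cursor.execute(...) that interpolates a tainted variable is a sink hit.
def find_sql_taint(source: str):
    tree = ast.parse(source)
    tainted, findings = set(), []
    for node in ast.walk(tree):
        # Source rule: x = request.args.get(...)
        if isinstance(node, ast.Assign) and isinstance(node.value, ast.Call):
            fn = node.value.func
            if (isinstance(fn, ast.Attribute) and fn.attr == "get"
                    and isinstance(fn.value, ast.Attribute)
                    and fn.value.attr == "args"):
                tainted.update(t.id for t in node.targets
                               if isinstance(t, ast.Name))
        # Sink rule: cursor.execute(f"... {tainted_var} ...")
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"):
            for arg in node.args:
                if isinstance(arg, ast.JoinedStr):  # an f-string argument
                    for part in arg.values:
                        if (isinstance(part, ast.FormattedValue)
                                and isinstance(part.value, ast.Name)
                                and part.value.id in tainted):
                            findings.append((node.lineno, part.value.id))
    return findings

vulnerable = (
    "query = request.args.get('q')\n"
    "cursor.execute(f\"SELECT * FROM products WHERE name LIKE '%{query}%'\")\n"
)
print(find_sql_taint(vulnerable))  # [(2, 'query')]
```

Running it on the parameterized version reports nothing, because the query string is a plain constant rather than an f-string interpolating tainted data.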

What SAST catches (and what it misses)

We’ve been running static analysis on enterprise codebases for years now. Here’s our honest take on what SAST is great at and where it falls short.

SAST excels at finding:

  • Injection flaws — SQL injection, command injection, XSS, LDAP injection. These have well-defined source-to-sink patterns that data-flow analysis was basically designed for.
  • Hardcoded secrets — API keys, passwords, tokens embedded in source code. GitGuardian’s 2023 State of Secrets Sprawl report counted over 10 million hardcoded secrets exposed in public GitHub commits in a single year.
  • Insecure cryptography — Using MD5 for password hashing, ECB mode for encryption, weak key sizes. These are pattern-matching problems, and SAST tools nail them.
  • Buffer overflows in C/C++ — Unsafe string operations, unchecked array bounds. This is where SAST originally cut its teeth back in the early 2000s.
  • Configuration issues — Disabled CSRF protection, overly permissive CORS, debug mode left on in production.
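To make the cryptography bullet concrete, here is the kind of before/after a SAST finding typically points at, using only the standard library. The iteration count and helper names below are illustrative, not a recommendation from any specific tool.

```python
import hashlib
import os
import secrets

# Flagged: MD5 is unsuitable for password storage (maps to CWE-327 / CWE-916)
def hash_password_weak(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# Fix: a salted, deliberately slow KDF (iteration count here is illustrative)
def hash_password(password: str) -> str:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt.hex() + ":" + digest.hex()

def verify_password(password: str, stored: str) -> bool:
    salt_hex, digest_hex = stored.split(":")
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                    bytes.fromhex(salt_hex), 600_000)
    # Constant-time comparison avoids leaking a timing signal
    return secrets.compare_digest(candidate.hex(), digest_hex)
```

The weak version is a pure pattern match (`hashlib.md5` reachable from a password flow), which is exactly why SAST tools are reliable on this class of bug.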

Where SAST struggles:

  • Business logic flaws — A SAST tool can’t tell you that your e-commerce app lets users apply discount codes twice. It doesn’t understand business rules.
  • Authentication and authorization issues — Sure, it can flag missing auth checks in some frameworks, but complex RBAC bugs? Not really.
  • Race conditions — These depend on runtime behavior and timing. Static analysis can find some obvious cases (TOCTOU in file operations), but most slip through.
  • Runtime-dependent vulnerabilities — If your app loads configuration from a database and that config contains a URL used in SSRF-vulnerable code, SAST probably won’t catch it because the dangerous value only exists at runtime.

This is why we always recommend pairing SAST with DAST (Dynamic Application Security Testing) — they complement each other. SAST finds problems in your code; DAST finds problems in your running application.

The false positive problem

Let’s talk about the elephant in the room: false positives.

Every security team we’ve worked with has the same complaint: “We turned on SAST and got 3,000 findings on day one. Half of them are garbage.” And honestly? They’re not wrong. Early SAST tools were notorious for this.

Here’s the thing though — the technology has gotten significantly better. Modern SAST engines use interprocedural analysis, understand framework-specific sanitization, and can follow data flow across multiple files and even modules.

At Offensive360, we’ve spent a lot of engineering effort on reducing false positives specifically because we’ve heard this complaint so many times. Our approach combines traditional data-flow analysis with AI-assisted validation for languages where the pattern-matching approach hits its limits. The goal is findings you actually want to fix, not a wall of noise.

Some practical tips for dealing with false positives regardless of which tool you use:

  1. Start with high-severity findings only. Filter to Critical and High. Fix those first. You can always expand later.
  2. Suppress individual findings, not rules. When you hit a false positive, suppress that specific finding with an inline comment or tool-specific annotation. Don’t disable the entire rule — you’ll miss real bugs.
  3. Use baseline scans. Run SAST on your main branch and baseline it. Then only show new findings on pull requests. This is how you avoid the “3,000 findings on day one” problem.
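The suppression syntax in tip 2 is tool-specific. As one example, Bandit (a popular open-source Python SAST tool) accepts an inline `# nosec` marker scoped to a single check:

```python
import hashlib

# Bandit flags any MD5 use via its insecure-hash check (B324). If this hash
# is a non-security cache key, suppress just this one finding inline rather
# than disabling the whole rule across the codebase.
def cache_key(url: str) -> str:
    return hashlib.md5(url.encode()).hexdigest()  # nosec B324 - cache key only

print(cache_key("https://example.com"))
```

The comment documents *why* the finding is a false positive, which is the part future reviewers actually need.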

Where SAST fits in your development workflow

The biggest mistake teams make with SAST is treating it as a gate at the end of the pipeline. Something that runs in a weekly scan and generates a PDF that nobody reads.

That’s how you get a backlog of 2,000 unresolved findings and a security team that everyone avoids at lunch.

The better approach — and this is pretty much consensus in DevSecOps at this point — is shifting SAST left:

IDE integration

Some SAST tools can run in the developer’s IDE and flag issues as they type. This is the fastest feedback loop possible. The developer sees the issue, fixes it immediately, and never commits vulnerable code in the first place.

Pre-commit hooks

Run a lightweight SAST scan on changed files before the commit goes through. Keep it fast — under 30 seconds — or developers will bypass it.
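For a Python codebase, one way to wire this up is the pre-commit framework with Bandit’s published hook — it only scans the files staged in the commit, which keeps it fast. The `rev` below is illustrative; pin whichever release you actually use.

```yaml
# .pre-commit-config.yaml — run Bandit on changed Python files at commit time
repos:
  - repo: https://github.com/PyCQA/bandit
    rev: 1.7.8          # illustrative; pin your own version
    hooks:
      - id: bandit
        args: ["-ll"]   # medium severity and above only, to keep it quick
```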

CI/CD pipeline

This is where the full scan happens. Run SAST on every pull request. Block the merge if there are Critical or High findings. Here’s what that looks like in a GitHub Actions workflow:

# .github/workflows/sast.yml
name: SAST Scan
on: [pull_request]
jobs:
  sast:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Offensive360 SAST
        run: |
          curl -sSL https://get.offensive360.com/cli | bash
          o360 scan --source . --format sarif --output results.sarif
      - name: Upload SARIF
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: results.sarif

Scheduled full scans

Run a comprehensive scan nightly or weekly on your main branch. This catches issues that incremental scans might miss and gives you trending data.
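In GitHub Actions, the workflow shown earlier only needs an extra trigger to double as a nightly full scan (cron times are UTC; the schedule below is an example):

```yaml
# Add a schedule trigger alongside pull_request in the same workflow
on:
  pull_request:
  schedule:
    - cron: "0 3 * * *"   # 03:00 UTC nightly full scan of the default branch
```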

SAST across different languages

Not all languages are equally easy to analyze statically. This matters when you’re choosing a tool.

Java and C# are probably the best-supported languages across all SAST tools. They’re statically typed, have well-defined security APIs, and the ecosystem of frameworks (Spring, ASP.NET) is well-mapped.

JavaScript and TypeScript are trickier. Dynamic typing, prototype pollution, the callback-heavy nature of Node.js code — all of this makes data-flow analysis harder. But the tooling has gotten much better, especially for TypeScript where the type system gives the analyzer more to work with.

Python falls somewhere in the middle. Dynamic typing is a challenge, but Python’s relatively straightforward control flow helps. Django and Flask have well-understood security patterns.

Go is actually quite nice for static analysis. The language is simple, the type system is strict, and the standard library’s security-relevant functions are well-documented.

C and C++ are where SAST originated and where it arguably matters most. Memory safety bugs, buffer overflows, use-after-free — these are the bugs that lead to CVEs like CVE-2014-0160 (Heartbleed), which was a simple bounds-check failure in OpenSSL.

Common SAST standards and benchmarks

If you’re evaluating SAST tools, you’ll run into these:

  • OWASP Benchmark — An open test suite with thousands of test cases across the OWASP Top 10. It measures true positive rate vs. false positive rate. Worth running, but don’t treat it as the only metric.
  • CWE — The Common Weakness Enumeration. When a SAST tool reports a finding, it should map to a CWE. This is how you get a common language across tools.
  • NIST SARD — The Software Assurance Reference Dataset. A collection of test cases maintained by NIST. Useful for academic evaluation, less so for real-world tool selection.
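The OWASP Benchmark reduces each tool to a single score per category: true-positive rate minus false-positive rate, so random guessing lands near 0 and a perfect tool scores 1.0. A quick sketch of that arithmetic (the function name is ours):

```python
# OWASP Benchmark-style score: reward catching real flaws, penalize noise
def benchmark_score(tp: int, fn: int, fp: int, tn: int) -> float:
    tpr = tp / (tp + fn)   # share of real flaws the tool caught
    fpr = fp / (fp + tn)   # share of safe cases the tool wrongly flagged
    return tpr - fpr

# A tool that catches 90% of real flaws but flags 30% of safe code scores ~0.6
print(round(benchmark_score(tp=90, fn=10, fp=30, tn=70), 2))  # 0.6
```

This is why a tool that simply flags everything scores poorly: its TPR of 1.0 is cancelled out by an FPR of 1.0.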

Getting started with SAST

If you’re not running SAST yet, here’s the practical path:

  1. Pick a tool that supports your primary language. Run it against your main codebase.
  2. Don’t try to fix everything at once. Baseline existing findings and focus on new code.
  3. Integrate into CI/CD. Start with warnings only, then graduate to blocking merges on Critical findings after a few weeks.
  4. Track your fix rate. If findings are piling up faster than you’re fixing them, adjust your thresholds or triage process.

Offensive360’s SAST covers Java, C#, JavaScript, TypeScript, Python, Go, PHP, Ruby, and more out of the box, with AI-powered analysis for languages like Kotlin, Swift, and Dart. If you’re looking for a tool that balances detection accuracy with developer experience, it’s worth checking out.

The bottom line: SAST isn’t a silver bullet. No single tool is. But it’s one of the highest-ROI security practices you can adopt. Fix bugs in code before they ever reach production, and you save yourself the incident response, the customer notification, and the uncomfortable board meeting.

Start scanning. Fix what matters. Iterate.

Written by Offensive360 Security Research Team

Find vulnerabilities before attackers do

Run Offensive360 SAST and DAST against your applications to catch security issues early.