
ISTQB Foundation Level (CTFL 4.0.1) · ~6 min read · 11/26

Static Analysis

// using tools to detect defects in code and documents before execution.


Tools can examine millions of lines of code in seconds and find patterns humans consistently miss.

A developer reviews 500 lines of code for security vulnerabilities. They spot 3 obvious ones. Meanwhile, a static analysis tool scans the same code in 2 seconds and identifies 11 issues — including 4 SQL injection vulnerabilities hidden in utility functions the reviewer did not even open.

Static analysis uses automated tools to examine code, configurations, or models without executing them. It is a systematic, scalable form of static testing that complements human reviews.
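To make the idea concrete, here is a minimal sketch (illustrative names, not from the syllabus) of the kind of defect a static analysis tool reports just by reading the source, without ever running it:

```python
# A tool flags the string-concatenated query as a SQL injection risk;
# the parameterised version passes the same scan cleanly.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # FLAGGED: user input concatenated directly into SQL (injection risk)
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # OK: parameterised query -- the driver binds the input as a literal
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Note that the tool never needs a database or test data to raise the warning; the dangerous pattern is visible in the text of the code itself.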

// example: github — static analysis in every pull request

Scenario: GitHub integrates multiple static analysis tools directly into its CI/CD pipeline. Every pull request is automatically scanned before a human reviewer even looks at it.

What happened: SonarQube checks for code smells, duplications, and security hotspots. CodeQL performs semantic analysis to find security vulnerabilities such as SQL injection and XSS. ESLint enforces JavaScript coding standards. If any tool flags a critical issue, the pull request is automatically blocked from merging — no human review is required for that gate.

Why it matters: Static analysis catches entire categories of defects before code reaches QA or production. It scales without adding headcount and runs consistently on every change — something human reviews cannot match at speed.

Static Analysis — CTFL 4.0.1

Static analysis is the automated examination of software work products — primarily source code — without executing them. It supports finding defects and assessing quality attributes early in development.

What static analysis tools examine

  • Coding standard violations — naming conventions, formatting rules, banned functions
  • Security vulnerabilities — SQL injection, XSS, buffer overflows, hardcoded credentials
  • Code complexity — cyclomatic complexity, deeply nested logic, long methods
  • Dead code — unreachable code paths, unused variables and imports
  • Duplicate code — copy-paste patterns that create maintenance risk
  • Dependency issues — outdated libraries, vulnerable third-party packages
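Several of these categories can coexist in a few lines of code. The toy module below (all names and values are invented for illustration) shows what a typical scan would report:

```python
# Each FLAGGED comment marks a finding category from the list above.
import os                    # FLAGGED: unused import (dead code)

API_KEY = "sk_test_123"      # FLAGGED: hardcoded credential (security)

def total(items):
    subtotal = 0
    for item in items:
        subtotal += item
    return subtotal
    print("done")            # FLAGGED: unreachable code after return
```

The code runs correctly, which is exactly the point: none of these findings would surface as a test failure, yet all of them are visible to a tool that simply reads the source.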

When static analysis is used

Most effectively used as part of the development pipeline — integrated into the IDE (real-time feedback), triggered on each commit, and enforced as a quality gate in CI/CD. The earlier the feedback, the cheaper the fix.
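A CI/CD quality gate of the kind described above can be sketched in a few lines. This is an assumed, simplified model (the rule names and severity levels are invented), not the API of any real tool:

```python
# A gate passes only if no finding reaches the blocking severity.
def quality_gate(findings, block_at="critical"):
    levels = {"info": 0, "minor": 1, "major": 2, "critical": 3}
    blockers = [f for f in findings if levels[f["severity"]] >= levels[block_at]]
    return len(blockers) == 0, blockers

ok, blockers = quality_gate([
    {"rule": "unused-import", "severity": "minor"},
    {"rule": "sql-injection", "severity": "critical"},
])
# The critical finding blocks the merge; the minor one merely reports.
```

Real pipelines delegate this decision to the tool's own threshold configuration, but the principle is the same: the gate is mechanical and applies identically to every change.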

Limitations

Static analysis cannot find all defects. Runtime errors, timing issues, and user experience problems require dynamic testing. Tools also produce false positives — flagging code that is not actually defective.

// tip: Exam Tip: Static analysis is a form of static testing performed by tools, NOT humans. Reviews are human-led; static analysis is tool-led. Both are static (no execution required). The exam distinguishes between them — if a question mentions an "automated tool" scanning code, the answer is static analysis, not review.

Static Analysis Findings — Examples by Category

| Finding Category | Example | Risk if Ignored |
| --- | --- | --- |
| Security vulnerability | User input passed directly into SQL query without sanitisation | SQL injection attack — database compromised |
| Coding standard violation | Function is 400 lines long (standard: max 50) | Maintainability risk — hard to read, test, and modify |
| Dead code | An entire error-handling branch is unreachable due to a logic condition that is always false | Misleading codebase — developers trust code that never runs |
| High complexity | Cyclomatic complexity of 35 in a billing calculation function (threshold: 10) | High defect probability — complex code is harder to test and maintain |
| Vulnerable dependency | Third-party library with known CVE (Common Vulnerabilities and Exposures) entry still in use | Known exploit available — system is vulnerable to published attacks |
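The "high complexity" finding is worth a closer look, because it shows how a tool measures code without running it. The sketch below is a deliberately simplified estimate using Python's standard `ast` module (real tools count more node types and handle more languages):

```python
# Toy cyclomatic-complexity estimate: parse the source (no execution)
# and count decision points, plus one for the single entry path.
import ast

def cyclomatic_estimate(source: str) -> int:
    decision_nodes = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, decision_nodes) for node in ast.walk(tree))

SNIPPET = """
def bill(amount, vip, overdue):
    if vip:
        amount *= 0.9
    if overdue:
        amount *= 1.1
    return amount
"""
# Two `if` statements -> estimate of 3 independent paths.
```

A function scoring 35 against a threshold of 10, as in the table above, would be flagged purely from this kind of structural count.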

Security Findings

// Example findings

  • User input passed directly into SQL query without sanitisation

  • Hardcoded API keys or credentials in source code

  • Missing authentication checks on sensitive endpoints

  • Insecure cryptographic algorithms (MD5, SHA1)

// Risk if ignored

SQL injection, data breach, unauthorized access, compliance failure
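The "insecure cryptographic algorithms" finding from the list above can be illustrated with a small sketch. The function names are invented; the pattern (MD5 flagged, a salted iterated KDF preferred) is what a typical security scanner reports:

```python
import hashlib

def hash_password_weak(pw: str) -> str:
    # FLAGGED: MD5 is cryptographically broken for password storage
    return hashlib.md5(pw.encode()).hexdigest()

def hash_password_better(pw: str, salt: bytes) -> bytes:
    # Preferred: a salted, iterated KDF (here PBKDF2 with SHA-256)
    return hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 100_000)
```

Again, no execution is needed: the call to `hashlib.md5` is visible in the source, which is why this class of defect is cheap for tools to find and expensive for humans to spot at scale.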


False Positive

Tool flags valid code as defective. Example: null-check flagged as "potential NPE" even though it is correct.

True Positive

Tool correctly identifies a real defect. Example: SQL injection vulnerability in unsanitised query.
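The false-positive pattern above can be shown in code. This is an illustrative sketch, not the behaviour of any particular tool:

```python
# The guard makes the later access safe, yet a naive "possible None
# dereference" rule may still flag it -- a classic false positive.
from typing import Optional

def name_length(name: Optional[str]) -> int:
    if name is None:     # guard: the access below can never see None
        return 0
    return len(name)     # a naive checker may still flag "possible None" here
```

Modern analysers track this kind of control flow and suppress the warning, but triaging residual false positives remains part of the cost of using static analysis.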


Static Analysis vs Code Reviews vs Dynamic Testing

| Aspect | Static Analysis (Tools) | Code Review (Human) | Dynamic Testing |
| --- | --- | --- | --- |
| Performed by | Automated tools | Human reviewers | Human testers / automated test frameworks |
| Speed | Seconds — scales to any codebase | Slow — limited by reviewer time | Varies — can be fast with automation |
| Finds | Patterns, violations, known vulnerability types | Logic errors, design issues, context-specific problems | Runtime failures, performance, user experience |
| Misses | Context-dependent logic errors, runtime issues | Inconsistent — depends on reviewer expertise | Defects in untested paths, requirement ambiguities |
| False positives | Common — tools flag valid code as issues | Rare — humans understand context | Not applicable |
| Best for | Security, standards, complexity at scale | Design quality, business logic, complex algorithms | Verifying behaviour, performance, integration |

// warning: Exam Trap: "Static analysis guarantees code is defect-free." This is false. Static analysis finds specific pattern-based defects — it cannot find all defects. It produces false positives (flagging correct code as defective) and false negatives (missing defects that do not match known patterns). It must be combined with reviews and dynamic testing for comprehensive quality assurance.

Exam Practice Questions

// ctfl 4.0.1 style practice questions

Q1. A tool scans the codebase and flags a function with a cyclomatic complexity score of 45, exceeding the project threshold of 15. This is an example of:
Q2. Which of the following defects can static analysis detect that dynamic testing typically CANNOT?
Q3. A static analysis tool flags a correctly written null-check as a potential NullPointerException risk. This is an example of:
Q4. What is the key difference between static analysis and a code review?
// end