Tools can examine millions of lines of code in seconds and find patterns humans consistently miss
A developer reviews 500 lines of code for security vulnerabilities. They spot 3 obvious ones. Meanwhile, a static analysis tool scans the same code in 2 seconds and identifies 11 issues — including 4 SQL injection vulnerabilities hidden in utility functions the reviewer did not even open.
Static analysis uses automated tools to examine code, configurations, or models without executing them. It is a systematic, scalable form of static testing that complements human reviews.
// example: github — static analysis in every pull request
Static Analysis — CTFL 4.0.1
Static analysis is the automated examination of software work products — primarily source code — without executing them. It supports finding defects and assessing quality attributes early in development.
What static analysis tools examine
- Coding standard violations — naming conventions, formatting rules, banned functions
- Security vulnerabilities — SQL injection, XSS, buffer overflows, hardcoded credentials
- Code complexity — cyclomatic complexity, deeply nested logic, long methods
- Dead code — unreachable code paths, unused variables and imports
- Duplicate code — copy-paste patterns that create maintenance risk
- Dependency issues — outdated libraries, vulnerable third-party packages
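Several of these finding categories can appear in one short snippet. The code below is a hypothetical example written to trigger findings; the comments mark what a typical tool would report (exact rule names vary by tool).

```python
import os  # finding: unused import (dead code)

API_KEY = "sk-live-123456"  # finding: hardcoded credential in source


def find_user(username):
    # finding: SQL built by string interpolation -- injection risk
    return f"SELECT * FROM users WHERE name = '{username}'"


def process(order):
    if order is None:
        return None
        print("never runs")  # finding: unreachable code after return
    return order
```

None of these defects requires running the program to detect: each is visible in the source text alone, which is exactly the niche static analysis fills.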
When static analysis is used
Static analysis is most effective when integrated into the development pipeline: in the IDE (real-time feedback), triggered on each commit, and enforced as a quality gate in CI/CD. The earlier the feedback, the cheaper the fix.
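A quality gate boils down to one decision: given the tool's findings, does the build pass? The sketch below assumes a hypothetical findings format; real tools (SonarQube, ESLint, Bandit, and others) each emit their own report structure.

```python
def quality_gate(findings, fail_on=("critical", "high")):
    """Return True (pass) unless any finding has a blocking severity."""
    blocking = [f for f in findings if f["severity"] in fail_on]
    for f in blocking:
        print(f"BLOCKED: {f['rule']} at {f['file']}:{f['line']}")
    return not blocking


# Hypothetical report: one blocking finding, one informational one.
report = [
    {"rule": "sql-injection", "severity": "critical", "file": "db.py", "line": 42},
    {"rule": "unused-import", "severity": "info", "file": "app.py", "line": 1},
]
print("gate passed" if quality_gate(report) else "gate failed")
```

In CI/CD the gate's boolean typically becomes the process exit code, so a blocking finding fails the pipeline before the change can merge.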
Limitations
Static analysis cannot find all defects. Runtime errors, timing issues, and user experience problems require dynamic testing. Tools also produce false positives — flagging code that is not actually defective.
// tip: Exam Tip: Static analysis is a form of static testing performed by tools, NOT humans. Reviews are human-led. Static analysis is tool-led. Both are static (no execution required). The exam distinguishes between them — if the question mentions "automated tool" scanning code, the answer is static analysis, not review.
Static Analysis Findings — Examples by Category
| Finding Category | Example | Risk if Ignored |
|---|---|---|
| Security vulnerability | User input passed directly into SQL query without sanitisation | SQL injection attack — database compromised |
| Coding standard violation | Function is 400 lines long (standard: max 50) | Maintainability risk — hard to read, test, and modify |
| Dead code | An entire error-handling branch is unreachable due to a logic condition that is always false | Misleading codebase — developers trust code that never runs |
| High complexity | Cyclomatic complexity of 35 in a billing calculation function (threshold: 10) | High defect probability — complex code is harder to test and maintain |
| Vulnerable dependency | Third-party library with a known CVE (Common Vulnerabilities and Exposures entry) still in use | Known exploit available — system is vulnerable to published attacks |
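To make the mechanism concrete, here is a minimal sketch of how a pattern-based checker can work, using Python's `ast` module. The two rules are deliberately naive and the identifiers are illustrative; real tools ship hundreds of rules backed by data-flow analysis.

```python
import ast

SQL_KEYWORDS = ("SELECT", "INSERT", "UPDATE", "DELETE")


def scan(source):
    """Return (line, message) findings for a toy two-rule checker."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        # Rule 1: an f-string that starts with an SQL keyword suggests
        # a query built from interpolated input -> possible injection.
        if isinstance(node, ast.JoinedStr):
            for part in node.values:
                if (isinstance(part, ast.Constant)
                        and isinstance(part.value, str)
                        and part.value.strip().upper().startswith(SQL_KEYWORDS)):
                    findings.append((node.lineno, "possible SQL injection: f-string query"))
                    break
        # Rule 2: a string literal assigned to a *_KEY/*_PASSWORD/*_SECRET
        # name looks like a hardcoded credential.
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if (isinstance(target, ast.Name)
                        and target.id.upper().endswith(("KEY", "PASSWORD", "SECRET"))
                        and isinstance(node.value, ast.Constant)
                        and isinstance(node.value.value, str)):
                    findings.append((node.lineno, f"hardcoded credential: {target.id}"))
    return findings


sample = '''
API_KEY = "sk-live-123"
def q(name):
    return f"SELECT * FROM users WHERE name = '{name}'"
'''
print(scan(sample))
```

Note that the checker never executes the scanned code: it only inspects the parsed syntax tree, which is what makes the technique static.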
Security Findings
// Example findings
User input passed directly into SQL query without sanitisation
Hardcoded API keys or credentials in source code
Missing authentication checks on sensitive endpoints
Insecure cryptographic algorithms (e.g. MD5, SHA-1)
// Risk if ignored
SQL injection, data breach, unauthorized access, compliance failure
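The first of those findings deserves a concrete demonstration. The sketch below uses an in-memory SQLite database to show why a string-built query is dangerous and how a parameterised query, the usual fix a tool recommends, closes the hole. Names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")


def find_user_unsafe(name):
    # String interpolation: the input becomes part of the SQL text.
    return conn.execute(f"SELECT name FROM users WHERE name = '{name}'").fetchall()


def find_user_safe(name):
    # Parameterised query: the input is bound as data, never as SQL.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()


# A crafted input changes the unsafe query's meaning entirely:
payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # the OR clause matches every row
print(find_user_safe(payload))    # no user literally named "' OR '1'='1"
```

The unsafe variant turns the payload into `WHERE name = '' OR '1'='1'`, which is always true, while the safe variant treats the entire payload as an ordinary string value.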
False Positive
Tool flags valid code as defective. Example: a dereference flagged as a "potential NPE" even though a null check on the preceding line makes it safe.
True Positive
Tool correctly identifies a real defect. Example: SQL injection vulnerability in unsanitised query.
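The distinction is easier to see in code. In the hypothetical snippet below, a flow-insensitive tool may report a possible None dereference on the final line even though the guard makes it safe; that report would be a false positive.

```python
class User:
    def __init__(self, name):
        self.name = name


def display_name(user):
    if user is None:  # guard: user cannot be None past this line
        return "guest"
    # A flow-insensitive tool may still flag this access as a
    # "potential None dereference" -- a false positive.
    return user.name
```

A report on the same attribute access *without* the guard would be a true positive: the defect pattern and a real defect coincide.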
Static Analysis vs Code Reviews vs Dynamic Testing
| Aspect | Static Analysis (Tools) | Code Review (Human) | Dynamic Testing |
|---|---|---|---|
| Performed by | Automated tools | Human reviewers | Human testers / automated test frameworks |
| Speed | Seconds — scales to any codebase | Slow — limited by reviewer time | Varies — can be fast with automation |
| Finds | Patterns, violations, known vulnerability types | Logic errors, design issues, context-specific problems | Runtime failures, performance, user experience |
| Misses | Context-dependent logic errors, runtime issues | Inconsistent — depends on reviewer expertise | Defects in untested paths, requirement ambiguities |
| False positives | Common — tools flag valid code as issues | Rare — humans understand context | Not applicable |
| Best for | Security, standards, complexity at scale | Design quality, business logic, complex algorithms | Verifying behaviour, performance, integration |
// warning: Exam Trap: "Static analysis guarantees code is defect-free." This is false. Static analysis finds specific pattern-based defects — it cannot find all defects. It produces false positives (flagging correct code as defective) and false negatives (missing defects that do not match known patterns). It must be combined with reviews and dynamic testing for comprehensive quality assurance.
Exam Practice Questions