
complexipy

An extremely fast Python library, written in Rust, that calculates the cognitive complexity of Python files.

What is Cognitive Complexity?

Cognitive Complexity breaks from the practice of using purely mathematical models to assess software maintainability. It starts from the precedents set by Cyclomatic Complexity, but uses human judgment to decide how structures should be counted, yielding method complexity scores that align well with how developers perceive maintainability.

Unlike traditional complexity metrics, cognitive complexity focuses on how difficult code is to understand by humans, making it more relevant for maintaining and reviewing code.
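
To make the nesting idea concrete, here is a deliberately simplified counter built on Python's ast module. It is a rough sketch loosely inspired by the Sonar rules, not the metric complexipy actually computes (complexipy's implementation is in Rust and handles many more constructs):

```python
import ast

def rough_cognitive_complexity(source: str) -> int:
    """Very rough, nesting-aware complexity estimate for a snippet.

    Each if/for/while adds 1 plus the current nesting depth, so deeply
    nested control flow costs more than the same structures written flat.
    This is an illustration only, NOT complexipy's algorithm.
    """
    total = 0

    def walk(node, depth):
        nonlocal total
        for child in ast.iter_child_nodes(node):
            if isinstance(child, (ast.If, ast.For, ast.While)):
                total += 1 + depth        # +1 for the structure, +depth for nesting
                walk(child, depth + 1)
            else:
                walk(child, depth)        # other nodes (incl. def) add nothing here

    walk(ast.parse(source), 0)
    return total

flat = "if a:\n    pass\nif b:\n    pass"    # two flat branches
nested = "if a:\n    if b:\n        pass"     # one branch nested inside another
print(rough_cognitive_complexity(flat), rough_cognitive_complexity(nested))  # 2 3
```

Note how the nested version scores higher even though it has the same number of branches: that penalty for nesting is what distinguishes cognitive complexity from a plain branch count.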

Key benefits:

  • Identifies hard-to-understand code sections
  • Helps improve code quality and maintainability
  • Provides a more intuitive metric than traditional complexity measures

📄 Read the white paper: Cognitive Complexity, a new way of measuring understandability

Documentation

Documentation: https://rohaquinlop.github.io/complexipy/

Source Code: https://github.com/rohaquinlop/complexipy

PyPI: https://pypi.org/project/complexipy/

Requirements

  • Python >= 3.8
  • Git (optional) - required only if you want to analyze a git repository

Installation

pip install complexipy

Usage

Command Line Interface

# Analyze the current directory (recursively)
complexipy .

# Analyze a specific directory (recursively)
complexipy path/to/directory

# Analyze a remote Git repository
complexipy https://github.com/user/repo.git

# Analyze a single file
complexipy path/to/file.py

# Suppress console output
complexipy path/to/directory --quiet      # or -q

# List every function, ignoring the 15-point complexity threshold
complexipy path/to/file.py --ignore-complexity   # or -i

# Show only files / functions whose complexity exceeds the threshold
complexipy path/to/directory --details low       # or -d low

# Sort results (asc: ascending complexity, desc: descending complexity, name: A→Z)
complexipy path/to/directory --sort desc         # or -s desc

# Save results
complexipy path/to/directory --output-csv        # -c, writes complexipy.csv
complexipy path/to/directory --output-json       # -j, writes complexipy.json

Command-line options

  • -c, --output-csv – Write the report to complexipy.csv in the current working directory. Default: false.
  • -j, --output-json – Write the report to complexipy.json in the current working directory. Default: false.
  • -i, --ignore-complexity – Do not exit with an error when a function's cognitive complexity is > 15. All functions are still listed in the output. Default: off.
  • -d, --details <normal|low> – Control the verbosity of the output: normal shows every file and function; low shows only entries that exceed the complexity threshold. Default: normal.
  • -q, --quiet – Suppress console output. Exit codes are still returned. Default: false.
  • -s, --sort <asc|desc|name> – Order the results: asc (complexity ascending), desc (complexity descending), or name (alphabetical A→Z). Default: asc.

Note: The CLI exits with code 1 when at least one function exceeds the threshold of 15 points. Pass --ignore-complexity (-i) to disable this behaviour.
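
In a CI script you can rely on this exit code directly. A minimal sketch, using only the documented flags; the runner parameter is a hypothetical injection point so the call can be stubbed in tests, not part of complexipy:

```python
import subprocess

def complexity_gate(paths, ignore=False, runner=subprocess.run):
    """Return True when complexipy exits cleanly (no function above threshold).

    complexipy exits with code 1 when at least one function scores more than
    15 points, so the return code alone can gate a CI job. `runner` is only
    a hook for stubbing in tests; by default the real CLI is invoked.
    """
    cmd = ["complexipy", *paths, "--quiet"]
    if ignore:
        cmd.append("--ignore-complexity")  # report everything, never fail
    return runner(cmd).returncode == 0
```

With `ignore=True` the gate always passes and complexipy only reports, which matches the CLI's own -i behaviour.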

GitHub Action

You can use complexipy as a GitHub Action to automatically check code complexity in your CI/CD pipeline:

name: Check Code Complexity
on: [push, pull_request]

jobs:
  complexity:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - name: Check Python Code Complexity
      uses: rohaquinlop/complexipy-action@v2
      with:
        paths: .  # Analyze the entire repository

Action Inputs

  • paths (string, single path or list of paths) – required.
  • quiet (boolean) – optional.
  • ignore_complexity (boolean) – optional.
  • details (normal | low) – optional.
  • sort (asc | desc | name) – optional.
  • output_csv (boolean) – optional.
  • output_json (boolean) – optional.

Examples

Basic Usage:

- uses: rohaquinlop/complexipy-action@v2
  with:
    paths: |
      .
      project_path

Generate CSV Report:

- uses: rohaquinlop/complexipy-action@v2
  with:
    paths: .
    output_csv: true

Generate JSON Report:

- uses: rohaquinlop/complexipy-action@v2
  with:
    paths: .
    output_json: true

Analyze Specific Directory with Low Detail Output:

- uses: rohaquinlop/complexipy-action@v2
  with:
    paths: ./src/python
    details: low
    sort: desc

Pre-commit Hook

You can use complexipy as a pre-commit hook to automatically check code complexity before each commit. This helps maintain code quality by preventing complex code from being committed.

To use complexipy with pre-commit, add the following to your .pre-commit-config.yaml:

repos:
- repo: https://github.com/rohaquinlop/complexipy-pre-commit
  rev: v3.0.0  # Use the latest version
  hooks:
    - id: complexipy

The pre-commit hook will:

  • Run automatically before each commit
  • Check the cognitive complexity of your Python files
  • Prevent commits if any function exceeds the complexity threshold
  • Help maintain code quality standards in your repository

VSCode Extension

You can also use complexipy directly in Visual Studio Code through our official extension:

  1. Open VS Code
  2. Go to the Extensions view (Ctrl+Shift+X / Cmd+Shift+X)
  3. Search for "complexipy"
  4. Click Install

The extension provides:

  • Real-time complexity analysis as you type
  • Visual complexity indicators:
      • Function complexity shown with the ƒ symbol
      • Line-level complexity shown with the + symbol
  • Color-coded indicators:
      • Green: low complexity (functions ≤ 15, lines ≤ 5)
      • Red: high complexity (functions > 15, lines > 5)
  • Automatic updates on file save, active editor change, and text changes

You can also trigger a manual analysis by:

  1. Opening the Command Palette (Ctrl+Shift+P / Cmd+Shift+P)
  2. Typing "complexipy"
  3. Selecting the "complexipy" command

Python API

Complexipy can also be used directly from your Python code. The high-level helper functions below wrap the Rust core and return lightweight Python classes that behave like regular dataclasses.

  • complexipy.file_complexity(path: str) -> FileComplexity – analyse a Python file on disk.
  • complexipy.code_complexity(src: str) -> CodeComplexity – analyse a string that contains Python source.

Both helpers return objects whose public attributes you can freely access:

FileComplexity
├─ path: str                   # Relative path of the analysed file
├─ file_name: str              # Filename without the directory part
├─ complexity: int             # Cognitive complexity of the whole file
└─ functions: List[FunctionComplexity]

FunctionComplexity
├─ name: str
├─ complexity: int
├─ line_start: int
├─ line_end: int
└─ line_complexities: List[LineComplexity]

LineComplexity
├─ line: int
└─ complexity: int
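
These classes come from the Rust core, but their shape can be mirrored with plain dataclasses for illustration. The sketch below uses hypothetical stand-ins with the documented attributes (they are not the classes complexipy returns) to show one way of filtering functions against the 15-point threshold:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical stand-ins mirroring the documented attribute layout; the
# real objects are produced by complexipy's Rust core.
@dataclass
class LineComplexity:
    line: int
    complexity: int

@dataclass
class FunctionComplexity:
    name: str
    complexity: int
    line_start: int
    line_end: int
    line_complexities: List[LineComplexity] = field(default_factory=list)

@dataclass
class FileComplexity:
    path: str
    file_name: str
    complexity: int
    functions: List[FunctionComplexity] = field(default_factory=list)

def over_threshold(fc: FileComplexity, limit: int = 15) -> List[FunctionComplexity]:
    """Return the functions whose cognitive complexity exceeds `limit`."""
    return [fn for fn in fc.functions if fn.complexity > limit]

fc = FileComplexity("app/views.py", "views.py", 21, [
    FunctionComplexity("render", 4, 1, 20),
    FunctionComplexity("dispatch", 17, 22, 80),
])
print([fn.name for fn in over_threshold(fc)])  # ['dispatch']
```

The same loop works unchanged on the real FileComplexity objects, since only the documented attributes are touched.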

Quick-start

from complexipy import file_complexity, code_complexity

# Analyse a file
fc = file_complexity("path/to/your/file.py")
print(f"Total file complexity: {fc.complexity}")

for fn in fc.functions:
    print(f"{fn.name}:{fn.line_start}-{fn.line_end} → {fn.complexity}")

# Analyse an in-memory snippet
snippet = """
def example_function(x):
    if x > 0:
        for i in range(x):
            print(i)
"""
cc = code_complexity(snippet)
print(f"Snippet complexity: {cc.complexity}")

End-to-End Example

The following walk-through shows how to use Complexipy from both the command line and the Python API, how to interpret the scores it returns, and how to save them for later use.

1. Prepare a sample file

Create example.py with two simple functions:

def a_decorator(a, b):
    def inner(func):
        return func
    return inner


def b_decorator(a, b):
    def inner(func):
        if func:
            return None
        return func
    return inner

2. Run the CLI

Analyse the file from your terminal:

complexipy example.py

Typical output (shortened):

───────────────────────────── 🐙 complexipy 3.2.0 ──────────────────────────────
                                    Summary
            ┏━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
            ┃ Path       ┃ File       ┃ Function    ┃ Complexity ┃
            ┡━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
            │ example.py │ example.py │ a_decorator │ 0          │
            ├────────────┼────────────┼─────────────┼────────────┤
            │ example.py │ example.py │ b_decorator │ 1          │
            └────────────┴────────────┴─────────────┴────────────┘
🧠 Total Cognitive Complexity: 1
1 file analyzed in 0.0092 seconds
────────────────────────── 🎉 Analysis completed! 🎉 ───────────────────────────

What do those columns mean?

  • Path / File – location of the analysed source file
  • Function – function or method name that was measured
  • Complexity – the cognitive complexity score of that function (lower is better)

3. Use the Python API

from complexipy import file_complexity, code_complexity

# Analyse the file on disk
fc = file_complexity("example.py")
print(fc.complexity)   # → 1

# Analyse an in-memory snippet
snippet = "for x in range(10):\n    print(x)"
cc = code_complexity(snippet)
print(cc.complexity)   # → 1

4. Why is the score 1?

def b_decorator(a, b):  # 0
  def inner(func):      # 0
    if func:            # +1 – decision point
      return None       # 0
    return func         # 0
  return inner          # 0

Only a single if branch is encountered, therefore the file's total complexity is 1.

5. Persisting the results

  • CSV – complexipy example.py -c → creates complexipy.csv
  • JSON – complexipy example.py -j → creates complexipy.json
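
The saved reports can then be post-processed with the standard library. The sketch below assumes the CSV mirrors the console table's columns (Path, File, Function, Complexity); inspect the generated complexipy.csv before relying on these header names, as they are not documented here:

```python
import csv
import io

# Sample rows in the shape of the console table above; the real header
# names in complexipy.csv may differ -- check the generated file first.
sample = io.StringIO(
    "Path,File,Function,Complexity\n"
    "example.py,example.py,a_decorator,0\n"
    "example.py,example.py,b_decorator,1\n"
)

rows = list(csv.DictReader(sample))
total = sum(int(r["Complexity"]) for r in rows)
worst = max(rows, key=lambda r: int(r["Complexity"]))
print(total, worst["Function"])  # 1 b_decorator
```

To process a real report, replace the StringIO sample with `open("complexipy.csv", newline="")`.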

6. Scaling up your analysis

  • Entire folder (recursively): complexipy .
  • Specific directory: complexipy ~/projects/my_app
  • Remote Git repository:
    complexipy https://github.com/rohaquinlop/complexipy          # print to screen
    complexipy https://github.com/rohaquinlop/complexipy -c       # save as CSV
    

Contributors

Contributor avatars are generated with contributors-img.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • Thanks to G. Ann Campbell for publishing the paper "Cognitive Complexity a new way to measure understandability".
  • This project is inspired by the Sonar way to calculate cognitive complexity.
