ExDNA 🧬

Code duplication detector for Elixir, inspired by jscpd but built on Elixir's native AST instead of token matching.

Because ExDNA understands code structure — not just text — fn(a, b) -> a + b end and fn(x, y) -> x + y end are recognized as the same code. It also tells you how to fix each clone: extract a function, a macro, or a behaviour callback.
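The structural equivalence above can be sketched in plain Elixir (an illustrative sketch, not ExDNA's internal code): parse both snippets, drop line/column metadata, and rename variables to positional placeholders before comparing.

```elixir
# Illustrative sketch, not ExDNA internals: rename variables to positional
# placeholders and strip position metadata, then compare the ASTs.
normalize = fn ast ->
  {normalized, _names} =
    Macro.prewalk(ast, %{}, fn
      # A variable is a {name, meta, context} triple with an atom context.
      {name, _meta, ctx}, acc when is_atom(name) and is_atom(ctx) ->
        acc = Map.put_new(acc, name, :"$#{map_size(acc)}")
        {{acc[name], [], nil}, acc}

      node, acc ->
        # Every other node keeps its shape but loses line/column metadata.
        {Macro.update_meta(node, fn _ -> [] end), acc}
    end)

  normalized
end

{:ok, a} = Code.string_to_quoted("fn(a, b) -> a + b end")
{:ok, b} = Code.string_to_quoted("fn(x, y) -> x + y end")

normalize.(a) == normalize.(b)
# => true
```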

Features

- Type-I (exact), Type-II (renamed identifiers), and Type-III (near-miss) clone detection
- Refactoring suggestions: extract a function, a macro, or a behaviour callback
- Console, JSON, HTML, and SARIF reports, plus a programmatic API
- Credo check, LSP server, and incremental detection via a Mix compiler

Installation

def deps do
  [{:ex_dna, "~> 1.3", only: [:dev, :test], runtime: false}]
end

Usage

mix ex_dna                              # scan lib/
mix ex_dna lib/accounts lib/admin       # specific paths
mix ex_dna --literal-mode abstract      # enable Type-II (renamed vars)
mix ex_dna --min-similarity 0.85        # enable Type-III (near-miss)
mix ex_dna --min-mass 50                # fewer, larger clones
mix ex_dna --max-clones 10              # fail only above budget
mix ex_dna --format json                # machine-readable
mix ex_dna --format html                # browsable report
mix ex_dna --format sarif               # GitHub Code Scanning

Deep-dive into a specific clone:

mix ex_dna.explain 3

Shows the full anti-unification breakdown — common structure, divergence points, and the suggested extraction with call sites.
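The anti-unification idea can be illustrated with a toy version (not ExDNA's actual algorithm): walk two ASTs in lockstep, keep the structure they share, and replace each divergence point with a hole.

```elixir
# Toy anti-unifier (illustrative only): shared structure survives,
# divergent subtrees become a $hole placeholder.
defmodule ToyAntiUnify do
  # Same form with same-arity argument lists: recurse into the arguments.
  def au({f, _, a}, {f, _, b}) when is_list(a) and is_list(b) and length(a) == length(b),
    do: {f, [], Enum.zip_with(a, b, &au/2)}

  # Plain lists of equal length: anti-unify element-wise.
  def au(a, b) when is_list(a) and is_list(b) and length(a) == length(b),
    do: Enum.zip_with(a, b, &au/2)

  # Identical leaves are kept as-is.
  def au(x, x), do: x

  # Anything else is a divergence point.
  def au(_, _), do: {:"$hole", [], nil}
end

a = Code.string_to_quoted!("String.upcase(name) <> \"!\"")
b = Code.string_to_quoted!("String.upcase(name) <> \"?\"")

ToyAntiUnify.au(a, b)
```

The result keeps the `String.upcase(name) <> _` skeleton and marks only the differing string literal as a hole, which is exactly the shape an extracted function's parameter list falls out of.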

Programmatic API

report = ExDNA.analyze("lib/")
report = ExDNA.analyze(["lib/", "test/"])
report = ExDNA.analyze(paths: ["lib/"], min_mass: 20, literal_mode: :abstract)

report.clones   #=> [%ExDNA.Detection.Clone{}, ...]
report.stats    #=> %{files_analyzed: 42, total_clones: 3, ...}

Configuration

Options are layered: defaults → .ex_dna.exs → CLI flags.
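The layering behaves like successive Map.merge/2 calls with later layers winning (a simplification of the actual resolution; the values below are made up):

```elixir
# Later layers override earlier ones.
defaults = %{min_mass: 30, literal_mode: :keep, normalize_pipes: false}
file_opts = %{min_mass: 25}              # from .ex_dna.exs
cli_opts = %{literal_mode: :abstract}    # from CLI flags

resolved = defaults |> Map.merge(file_opts) |> Map.merge(cli_opts)
# => %{literal_mode: :abstract, min_mass: 25, normalize_pipes: false}
```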

Create .ex_dna.exs in your project root:

%{
  min_mass: 25,
  ignore: ["lib/my_app_web/templates/**"],
  excluded_macros: [:@, :schema, :pipe_through, :plug],
  normalize_pipes: true
}

| Option | CLI flag | Default | Description |
|---|---|---|---|
| min_mass | --min-mass | 30 | Minimum AST nodes for a fragment |
| min_similarity | --min-similarity | 1.0 | Threshold for Type-III (set < 1.0 to enable) |
| literal_mode | --literal-mode | keep | keep = Type-I only, abstract = also Type-II |
| normalize_pipes | --normalize-pipes | false | Treat x \|> f() same as f(x) |
| excluded_macros | --exclude-macro | [:@] | Macro calls to skip entirely |
| parse_timeout | — | 5000 | Max ms per file (kills hung parses) |
| ignore | --ignore | [] | Glob patterns to exclude |
| — | --max-clones | — | Clone budget (exit 1 only above this) |
| — | --format | console | console, json, html, or sarif |
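A fragment's "mass" in the table above is its AST node count; a rough way to compute it yourself (an approximation, not necessarily ExDNA's exact counting) is to fold over the tree with Macro.prewalk/3:

```elixir
# Approximate mass: count every node visited by Macro.prewalk/3.
mass = fn source ->
  {:ok, ast} = Code.string_to_quoted(source)
  {_ast, count} = Macro.prewalk(ast, 0, fn node, acc -> {node, acc + 1} end)
  count
end

mass.("def add(a, b), do: a + b")
```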

Suppressing clones

@no_clone true
def validate(params) do
  # intentional duplication, won't be flagged
end

Incremental detection

Add ExDNA as a compiler for automatic detection on mix compile:

def project do
  [compilers: Mix.compilers() ++ [:ex_dna]]
end

Only changed files are re-analyzed. Cache is stored in .ex_dna_cache (add to .gitignore).
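The changed-file check behind this boils down to a content-hash comparison (the cache layout below is assumed for illustration; ExDNA's actual format may differ):

```elixir
# Re-analyze a file only when its content hash differs from the cached one.
defmodule CacheSketch do
  def changed?(path, cache) do
    hash =
      path
      |> File.read!()
      |> then(&:crypto.hash(:sha256, &1))
      |> Base.encode16(case: :lower)

    Map.get(cache, path) != hash
  end
end
```

A cache hit means the stored clone data for that file can be reused without re-parsing.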

Editor integration

ExDNA ships an LSP server that pushes warnings inline on every save. It runs alongside your primary Elixir LSP.

mix ex_dna.lsp

Neovim

vim.lsp.config('ex_dna', {
  cmd = { 'mix', 'ex_dna.lsp' },
  root_markers = { 'mix.exs' },
  filetypes = { 'elixir' },
})
vim.lsp.enable('ex_dna')

Credo integration

ExDNA ships a Credo check that replaces the built-in DuplicatedCode with full Type-I/II/III detection and refactoring suggestions. It reuses Credo's already-parsed ASTs — no double parsing.

Use as a Credo plugin (recommended) — automatically registers the check and disables the built-in DuplicatedCode:

# .credo.exs
%{
  configs: [
    %{
      name: "default",
      plugins: [{ExDNA.Credo, []}]
    }
  ]
}

Or add directly to the :enabled checks list:

{ExDNA.Credo, []}

And disable the built-in check:

{Credo.Check.Design.DuplicatedCode, false}

All ExDNA options are available as check/plugin params:

{ExDNA.Credo, [
  min_mass: 40,
  literal_mode: :abstract,
  excluded_macros: [:@, :schema, :pipe_through],
  normalize_pipes: true,
  min_similarity: 0.85
]}

How it works

  1. Parse — Code.string_to_quoted/2 on every .ex/.exs file (parallel, with per-file timeout)
  2. Normalize — strip line/column metadata → rename variables to positional placeholders ($0, $1) → optionally abstract literals → optionally flatten pipes → sort struct/map fields
  3. Fingerprint — walk every subtree above min_mass nodes, hash with BLAKE2b; also generate sliding windows over module-level sibling sequences and compute structural sub-hashes for fuzzy candidate pruning
  4. Detect — group by hash (Type I/II); use inverted index on sub-hashes + Jaccard similarity + tree edit distance for Type III
  5. Filter — prune nested clones, keep the largest match per location
  6. Suggest — anti-unify each clone pair to compute the common structure, generate extract-function/macro/behaviour suggestions
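Step 3's fingerprinting reduces to hashing the serialized AST term; a minimal sketch of the mechanics (run here on an un-normalized tree — in the real pipeline the hash is taken after step 2, so equivalent fragments collide on purpose):

```elixir
# BLAKE2b fingerprint of an AST term.
fingerprint = fn ast ->
  ast
  |> :erlang.term_to_binary()
  |> then(&:crypto.hash(:blake2b, &1))
  |> Base.encode16(case: :lower)
end

{:ok, ast} = Code.string_to_quoted("fn(a, b) -> a + b end")
fingerprint.(ast)
```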

License

MIT