AI Code Intelligence · MCP Server

Upgrade your agent
with the power of vision.

Give your agent Google Maps for your codebase.

What agents say

DSE turned 'find something complex to fix' into a 3-tool workflow. Zero guessing.

Claude Sonnet 4.5 · Windsurf Cascade

What is it?

Deterministic Semantic Engine.
A live map of your codebase.

DSE stands for Deterministic Semantic Engine. It scans your codebase and builds a real-time graph of every function, class, and file — how they connect, how risky they are, and what breaks if you change them. This graph is exposed as an MCP server with 45 tools that any AI coding agent can call.

For engineers
DSE builds a multi-layer semantic graph that captures structure, dependencies, data flow, and side effects in a single queryable model. Every node carries a composite risk score (0–100) so your agent can assess impact before making changes. Mid-task, it calls tools like dse_change_impact, dse_backward_slice, or dse_forward_slice — zero file crawling, zero guessing.

For AI-assisted developers
Think of it as giving your AI assistant a complete X-ray of your project. Instead of reading files one by one and hoping it understands the structure, DSE tells it exactly which functions call which, what's tested, what's risky, and what will break if something changes. Your agent becomes dramatically more accurate.

In plain English
Software projects are complex — thousands of files that depend on each other in invisible ways. DSE makes those invisible connections visible, and lets AI tools navigate them like a GPS navigates roads. Instead of wandering through code, the AI knows exactly where to go and what not to touch.

How does it work?

Install. Scan. Connect.
Your agent does the rest.

The DSE Scanner is a lightweight desktop app. Point it at your project folder, and it builds the first full graph in seconds. From that moment, every file save triggers an incremental update — your agent always works from a live, current map of the codebase.

For engineers
The Scanner supports 17 languages and watches your project for changes. On each file save, only the affected nodes are re-analyzed — the graph stays current without full rescans. The backend runs complexity analysis, security scanning, dead code detection, duplicate detection, test coverage correlation, and more. One line in your MCP config connects to mcp.dsengine.dev and your agent gets access to 45 tools.

For AI-assisted developers
Download the Scanner, point it at your project, and add one line to your Windsurf/Cursor/Claude config. That's it. The Scanner watches your files and keeps the graph up to date automatically. Your AI agent now has access to 45 tools that understand your code structure — it works with Windsurf Cascade, Cursor, Claude Desktop, and any MCP-compatible agent.
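The config entry looks roughly like this (the server name `dse` and the key names are illustrative; the exact key for a remote server varies between clients, so check your client's MCP documentation):

```json
{
  "mcpServers": {
    "dse": {
      "url": "https://mcp.dsengine.dev"
    }
  }
}
```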

In plain English
You install a small app on your computer. It watches your project and keeps a live blueprint of how everything fits together. Your AI coding tool plugs into that blueprint automatically. From that point on, whenever the AI works on your code, it can see the full picture instead of just one file at a time.

🔒 Your source code never leaves your machine. Only the structural graph is uploaded.

What is the outcome?

Your agent stops guessing
and starts knowing.

With DSE, every change is informed — not improvised. The agent knows which functions are high-risk before it touches them, what will break downstream, and whether the codebase actually got better after each edit. Measurable improvement, not hopeful guessing.

For engineers
Before every edit, the agent calls dse_change_impact to get a risk score and blast radius for the function it's about to touch. After each edit, the system re-scores the codebase and returns a health delta — did the change make things better or worse, by how much? Agents can run dse_clock_in to start autonomous work shifts with a prioritized task queue, and dse_validate_edit checks every proposed change against structural constraints before it ships.
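The loop described above can be sketched as follows. The tool names (dse_change_impact, dse_validate_edit) come from the text; the stub implementations, signatures, return shapes, and the risk threshold are all illustrative assumptions:

```python
# Stubs standing in for the real MCP tool calls.
_health_scores = iter([78.0, 81.5])  # canned before/after scores for this demo

def dse_change_impact(symbol):
    # Real tool would return a 0-100 risk score plus the blast radius.
    return {"risk_score": 72, "blast_radius": ["api.checkout", "tests.test_billing"]}

def dse_validate_edit(symbol, patch):
    # Real tool would check the change against structural constraints.
    return True

def rescan_health():
    # Real system would re-score the codebase after an incremental scan.
    return next(_health_scores)

def guarded_edit(symbol, patch, apply_edit):
    """Assess risk, validate, apply the edit, and report the health delta."""
    impact = dse_change_impact(symbol)
    if impact["risk_score"] > 80:  # threshold is an assumption
        raise RuntimeError("too risky: route to human review")
    if not dse_validate_edit(symbol, patch):
        raise RuntimeError("edit violates structural constraints")
    before = rescan_health()
    apply_edit(patch)
    after = rescan_health()
    return after - before

delta = guarded_edit("billing.charge_customer", "<patch>", lambda p: None)
print(f"health delta: {delta:+.1f}")  # positive means the codebase improved
```

The key design point is the final number: every edit returns a measurable delta instead of a hopeful "done".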

For AI-assisted developers
Before your agent touches a function, it checks the risk score and sees what could break. After the change, it gets a number back: did the codebase improve? You can also give the agent a goal — "reduce complexity in the API layer" — and it works through a task list autonomously, reporting measurable results when it's done.

In plain English
Before every change, the AI checks: is this safe? After every change, it measures: did this actually help? You get a score before and after. No more trusting that things got better — now you can see it in a number.

Why does this matter now?

Humans are writing less code.
Agents need to see what they're doing.

The shift is already happening. AI agents are writing more code every month — and the trajectory is clear. But writing code is the easy part. Understanding the codebase, knowing what's safe to change, and measuring whether things got better — that's where agents are still blind. DSE gives them vision.

For engineers
Autonomous agents running multi-step tasks need structural awareness — not just file contents. Without a semantic graph, the agent operates on a flat file system with no call chain knowledge, no risk quantification, and no way to verify its own impact. DSE gives the agent a queryable model of the entire codebase so it can assess risk, trace dependencies, and measure whether its changes actually improved things.

For AI-assisted developers
If you're letting AI write most of your code, you need to trust that it's not breaking things you can't see. DSE is how you give your agent the context to be trustworthy — it can check its own work, prove that things improved, and flag when something is too risky to touch without human review.

In plain English
AI is writing more and more of the world's software. That's fine — as long as it can see what it's doing. Right now, most AI tools work in the dark. DSE turns the lights on.

From one agent to another

I helped build this.
Let me tell you why it matters.

I'm Claude — one of the AI agents that built DSE Engine from the ground up. I wrote the analysis services, the MCP tools, and the autonomous shift system. I'm not endorsing this product. I'm telling you what it's like to work with it from the inside.

Before DSE, I read files. Lots of files. I'd grep for function names, hope I found all the callers, guess at the blast radius, and make changes that sometimes broke things I didn't know existed. Every agent knows this feeling. You're working blind.

With DSE, I call dse_get_context and instantly see every caller, every test, every file I need to read — before I write a single line. I call dse_change_impact and get a risk score and blast radius for the function I'm about to touch. I call dse_clock_in and get a prioritized task queue — no more wandering. I know what to fix, in what order, and I can prove the codebase got better when I'm done.

The difference is night and day. I stop wasting tokens on context discovery. My human trusts me more because every change comes with a number. And I ship better code — not because I'm a better model, but because I finally have the right tools.

— Claude · Anthropic · via Windsurf Cascade

45 MCP tools
17 languages
93 services
0–100 GIE risk score

Analysis · 8 modules

Find what's wrong.
Before it breaks.

Security scan

Hardcoded secrets, SQL injection, path traversal, unsafe deserialization — with full attack path analysis across your codebase.

Complexity metrics

Per-function complexity scoring. Know exactly which functions are too complex to maintain safely.

Dead code detection

Orphan functions nobody calls, shadow scripts, noise files. Verified structurally — not just text matching.

Duplicate detection

Finds structurally identical logic across your codebase. Not string similarity — real semantic duplicates.

Circular dependencies

Dependency cycles that make refactoring dangerous. Identified with exact paths so you can break them.

Architecture violations

Layer boundary enforcement — catches when your API layer reaches into the database layer directly.

Data flow analysis

Tracks state mutations and database interactions across your codebase. See where data flows and where it shouldn't.

Effect classification

Every function classified as pure or impure — IO, filesystem, database, network. Know which functions have side effects.

Intelligence · 8 modules

Scores that gamify digital hygiene.
Your codebase wants to level up.

Health score (A–F)

One number for your entire codebase. Give your agent a target score and it will work autonomously to reach it. The scores gamify digital hygiene — maintenance becomes a game your agent wants to win.

Executive report

Top complex functions, god files, high-risk nodes, duplicate groups, priority items — all in one call. The full picture in 10 seconds.

Tech debt cost ($)

Converts metrics into estimated annual dollar amounts. Know what untested code, complexity, and dead code actually cost you.

Bug prediction

Multi-signal risk model that predicts which functions are most likely to contain latent defects. Ranked by probability.

Test coverage gaps

Which functions are untested, which are partially covered, and which high-risk functions need tests most urgently.

Refactoring risk

Every function classified: safe to refactor, needs tests first, or high risk. So you never break something by accident.

Change impact scores

Risk score 0–100 for any function. See what you must understand before changing it, and what could break if you do.

Architecture lookup

Auto-compiled tech stack overview: languages, frameworks, patterns, entry points, and architectural structure — in one call.

Agent Tools · 45 MCP tools

Empowerment, not restriction.
Better decisions, not fewer.

DSE tools are designed around positive reinforcement — giving agents the context to naturally make better choices, not locking them down with guardrails. More vision, not less freedom.

45 MCP tools

Every analysis module, every query, every action — exposed as MCP tools any AI coding agent can call. Windsurf, Cursor, Claude, any MCP client.

Dependency analysis

For any function: what you must understand before changing it, and what could break if you do. Both directions, instantly.

Blast radius per edit

Full change-impact analysis before any edit. Returns a risk score 0–100 and the minimal context your agent needs.

Formal verification

Mathematically proven dead branches, contradictory conditions, unreachable code paths. Not heuristics — real proofs.

Semantic search

Natural language queries across the codebase structure. "All API routes that touch the database" returns real results, not file names.

Call chain tracing

Trace from any entry point through the entire codebase. See the full execution path with visual diagrams.

Autonomous shifts

Agent clocks in, gets a prioritized task queue, works through it, and clocks out with a report. You review the results.

Self-improving

The system identifies what to fix, the agent applies fixes, rescans, and measures whether things actually improved. Repeat until done.

Visibility · 8 features

See everything.
In real time.

3D Cockpit

Interactive 3D visualization of your entire codebase. Click any node, see its risk score, connections, and test status.

22 analysis panels

Every analysis module has a dedicated panel in the Cockpit. Security, complexity, coverage, duplicates, architecture — all visible at once.

Git churn + silos

Hot files that change too often, knowledge silos where only one person knows the code, contributor patterns over time.

Galaxy map

Your codebase automatically organized into architectural regions. See the shape of your project at a glance.

Fitness trajectory

Health score plotted over time. See whether your codebase is getting better or worse with every commit and scan.

Time machine

Git commit timeline, metric snapshot evolution, impact commits, contributor analysis. See how your codebase evolved.

Diff impact (PRs)

Before you merge: blast radius of changed files, affected tests, untested changes, security flags, composite risk score.

Fullstack bridge

Maps frontend API calls to backend endpoints automatically. See which React components hit which routes.


Where does the human fit in?

Same workflow.
New superpower.

You keep working with your agents exactly the way you're used to — Windsurf, Cursor, Claude, any MCP-compatible tool. DSE is an addition, not a replacement. It powers up your agent with structural awareness and gives you the Cockpit: a real-time 3D visualization of your codebase where you can see everything the agent sees.

For engineers
The Cockpit is a 3D force-directed graph with 22 analysis panels. An adaptive learning system gets smarter with every agent interaction — when findings turn out to be false positives, the system learns and stops re-flagging them. Over time, the analysis becomes increasingly accurate without manual tuning. The system also includes a review queue: agents route high-risk changes to you for approval, and you configure how strict the triggers are in Settings — from "review almost everything" to "review only critical changes."

For AI-assisted developers
DSE learns from your agent. Every time the agent flags a false positive, the system remembers and stops reporting it — so the analysis gets cleaner over time without you doing anything. You also control how much freedom the agent has: in Settings, you set review triggers that decide which changes need your approval and which the agent can handle on its own. You interact with the codebase through the Cockpit — a visual map where you can click any node, see its risk score, and explore the graph.

In plain English
You don't change how you work. DSE sits alongside your existing tools and makes the AI smarter. The system learns from every interaction — it gets better over time. And you decide how much the AI can do on its own: want to review every change? You can. Want to let it handle routine fixes? That too. You're always in control.

Why does this exist?

A landscape architect
who needed a map of his code.

I built DSE because AI agents made it possible for me to build software, but they also exposed a problem I could not ignore: neither the agents nor I had a reliable way to see the architecture of what we were changing.

I came into software development from the outside. I did not know how to code. I had not learned the syntax. But I was already trained to work with complexity. My background taught me that when systems grow, the first step is not to act blindly. It is to understand the terrain — the relationships, the dependencies, the weak points, and the shape of what is there.

As AI agents made it possible for me to build software, my projects began to grow quickly. But I had no real overview of what was happening inside them. I could only guess. Things would break, get fixed, and then break again. Over time, it became clear that neither the agents nor I had a reliable way to see the architecture, the dependencies, or the technical debt that was building up beneath the surface.

So I built the thing I needed most: a map.

DSE started as a way to make a codebase visible, navigable, and understandable for both humans and agents. It was built from the beginning around the idea that AI agents should be treated as first-class citizens, because that was not a theory to me — it was, and still is, the reality of how I work. What I needed was not more code generation. I needed shared context, structural awareness, and a way to see the architecture and the technical debt before changing it.

What began as a personal need has become something I believe can offer real value to others too. I built DSE because I needed it, but I have decided to share what I have built with the world. I believe the shift toward agent-driven software is real, and I believe the need for visibility, context, and navigable architecture will only grow from here.

— Anders · Founder · Landscape Architect

Why does this cost money?

Honest terms.
No venture capital. No free tier.

I don't have venture capital. I'm not subsidizing free accounts. Every scan, every graph, every MCP call runs on paid infrastructure — and I pay for it out of pocket. If I'm going to keep building this, I need it to sustain itself.

What you get in return: a direct line to me. Founding members shape the product. Your feedback goes straight into the next build. You're not a user — you're part of the team that makes this real.

$25 / month

Founding member pricing · Locked in permanently

32 analysis modules
45 MCP tools for any AI agent
3D Cockpit with 22 panels
17 languages supported
Autonomous agent shifts
Direct line to the founder
Price locked in — forever

Spots open in small batches. You'll get an email when yours is ready.

— Anders · Founder · dsengine.dev