A deterministic safety boundary for untrusted SQL (LLM agents, IDE tools, automation). Parse → dry-run impact → policy gate → snapshot reference → replayable audit log.
Works with PostgreSQL / SQLite · Tool/LLM agnostic.
Example
The SQL didn’t change. The outcome did.
Risk is detected before execution: an untrusted DELETE is intercepted before it touches the database. The layer dry-runs it (a COUNT rewrite), estimates affected rows, and classifies risk deterministically.
The difference between reviewing SQL and owning execution
A Safe Execution Layer makes safety, evidence, and recovery part of the execution contract — not a best-effort checklist.
Capabilities
Start with simple rules and thresholds. Add approvals and audit trails without changing your DB or ORM.
AST-based parsing detects statement type, accessed tables, predicates, and risky patterns (e.g., missing WHERE).
Rewrite UPDATE/DELETE into safe COUNT queries to estimate affected rows before any write.
Rule-based allow / block / require_approval with risk levels (LOW → CRITICAL) and thresholds.
Record a snapshot reference before executing approved writes for recovery and traceability.
A structured log captures checks, dry-run evidence, decisions, and results — so you can replay and diff.
Treats SQL as input. Works with any agent, IDE plugin, internal tool, or workflow that emits SQL.
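As a concrete sketch of the dry-run idea above, the COUNT rewrite can be illustrated in a few lines of Python. This is a deliberately naive, regex-based version (the layer itself works on a parsed AST, per the capability above); the function name and its limits are ours, not the project's API:

```python
import re

def rewrite_to_count(sql: str) -> str:
    """Naive illustration: turn an UPDATE/DELETE into a COUNT(*) dry-run query.
    (The real layer does this on a parsed AST, not with regexes.)"""
    sql = sql.strip().rstrip(";")
    m = re.match(r"DELETE\s+FROM\s+(\w+)(?:\s+(WHERE\s+.*))?$", sql, re.I | re.S)
    if not m:
        m = re.match(r"UPDATE\s+(\w+)\s+SET\s+.*?(?:\s+(WHERE\s+.*))?$", sql, re.I | re.S)
    if not m:
        raise ValueError("not a rewritable write statement")
    table, where = m.group(1), m.group(2) or ""
    # Preserve the predicate so the estimate matches the write's scope.
    return f"SELECT COUNT(*) FROM {table} {where}".strip() + ";"
```

Note that a regex version like this breaks on quoted strings that happen to contain keywords or semicolons, which is exactly why the layer parses SQL into an AST instead.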
Quick Start
Installation
```bash
git clone https://github.com/interact-space/database-safe-layer.git
cd database-safe-layer
python -m venv venv
source venv/bin/activate   # Windows: venv\Scripts\activate
pip install -r requirements.txt
cp .env.example .env       # then edit .env
```
Usage
```python
from db_safe_layer import safe_exec, rollback_to

# SQL check (and execute if allowed by policy)
safe_exec(SQL)

# rollback to a specified snapshot id
rollback_to(SNAPSHOT_ID)
```
Where teams plug this in
Agent runtimes · AI IDE plugins · internal tooling · scheduled automation · CI guardrails for generated SQL.
Design
No heavy platform required: the layer sits in front of the DB and produces deterministic decisions + evidence.
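The "sits in front of the DB" placement can be sketched as a thin wrapper over an ordinary DB-API connection. Everything below (the class name, the inline no-WHERE rule) is a hypothetical illustration of where interception happens, not the project's implementation:

```python
import sqlite3

class SafeConnection:
    """Hypothetical sketch: intercept every statement before it reaches the DB."""

    def __init__(self, conn):
        self.conn = conn
        self.audit_log = []  # structured evidence, one entry per attempt

    def execute(self, sql, params=()):
        decision = self._check(sql)          # deterministic policy gate
        self.audit_log.append({"sql": sql, "decision": decision})
        if decision == "BLOCK":
            raise PermissionError(f"blocked by policy: {sql}")
        return self.conn.execute(sql, params)

    def _check(self, sql):
        head = sql.lstrip().upper()
        # Toy rule: block writes that carry no WHERE clause.
        if head.startswith(("DELETE", "UPDATE")) and "WHERE" not in head:
            return "BLOCK"
        return "ALLOW"

db = SafeConnection(sqlite3.connect(":memory:"))
db.execute("CREATE TABLE visits (id INTEGER, visit_date TEXT)")
db.execute("INSERT INTO visits VALUES (1, '2009-06-01')")
try:
    db.execute("DELETE FROM visits")          # full-table write: blocked
except PermissionError:
    pass
db.execute("DELETE FROM visits WHERE visit_date < '2010-01-01'")  # allowed
```

The client code keeps calling `execute`; only the construction site changes, which is what "no heavy platform required" amounts to.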
Architecture
```
client (agent/tool) --> safe-layer --> database
                             \-> audit log
```

Policy primitives
Rules resolve to allow, block, or require_approval, driven by risk levels (LOW → CRITICAL) and affected-row thresholds.
Example approval message
```
This operation will delete N rows from <table>.
Risk: HIGH (threshold=X)
Approve? (yes/no)
```
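Rendering that prompt from dry-run evidence is a pure formatting step; this helper and its signature are illustrative only:

```python
def approval_message(table: str, estimated_rows: int, risk: str, threshold: int) -> str:
    """Render the approval prompt from dry-run evidence (illustrative only)."""
    return (
        f"This operation will delete {estimated_rows} rows from {table}.\n"
        f"Risk: {risk} (threshold={threshold})\n"
        "Approve? (yes/no)"
    )

print(approval_message("visits", 3214, "HIGH", 1000))
```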
Audit & More examples
Optional deep dive for teams who want to see what gets logged and what gets blocked.
Audit log (example)
```json
{
  "run_id": "run_01H...",
  "sql": "DELETE FROM visits WHERE visit_date < '2010-01-01';",
  "statement_type": "DELETE",
  "tables": ["visits"],
  "dry_run": {
    "rewritten_sql": "SELECT COUNT(*) FROM visits WHERE visit_date < '2010-01-01';",
    "estimated_rows": 3214
  },
  "risk_level": "HIGH",
  "decision": "REQUIRE_APPROVAL",
  "snapshot_id": "postgres:txid:7428812",
  "execution_dag": [
    "parse",
    "dry_run",
    "classify_risk",
    "snapshot",
    "execute"
  ],
  "result": null
}
```

The log is the ground truth: what was proposed, what evidence was computed, what policy decided, and what actually ran.
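Because the log is structured JSON, "replay and diff" reduces to re-deriving the decision from the recorded evidence and comparing it with what the log says. A minimal sketch, assuming a toy 1,000-row threshold policy in place of the real one:

```python
import json

# The example record from above (abridged).
record = json.loads("""
{
  "sql": "DELETE FROM visits WHERE visit_date < '2010-01-01';",
  "statement_type": "DELETE",
  "dry_run": {"estimated_rows": 3214},
  "risk_level": "HIGH",
  "decision": "REQUIRE_APPROVAL"
}
""")

def replay_decision(rec, row_threshold=1000):
    """Re-derive the decision from recorded evidence (toy policy, illustrative)."""
    if rec["statement_type"] in ("DELETE", "UPDATE"):
        if rec["dry_run"]["estimated_rows"] > row_threshold:
            return "REQUIRE_APPROVAL"
    return "ALLOW"

# Replay: recompute from evidence, then diff against what was recorded.
assert replay_decision(record) == record["decision"]
```

Nothing here needs the original agent or its prompt; the evidence in the record is sufficient to audit the decision.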
Risk patterns (examples)
UPDATE/DELETE without WHERE (CRITICAL)
UPDATE users SET is_active=false;
Blocked by default (full-table write).

Large destructive write, even with WHERE (HIGH)
DELETE FROM visits WHERE visit_date < '2010-01-01';
Gated by the affected-rows threshold.

Schema-altering statements, DDL (HIGH)
DROP TABLE visits;
Typically blocked or requires elevated approval.
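The three patterns above can be mirrored by a small deterministic classifier. The threshold, risk labels, and return shape here are illustrative assumptions, not the project's API:

```python
def classify(statement_type, has_where, is_ddl=False, estimated_rows=0,
             row_threshold=1000):
    """Toy deterministic classifier mirroring the example risk patterns.
    The 1000-row threshold is an assumption for illustration."""
    if is_ddl:
        return "HIGH", "BLOCK"                 # e.g. DROP TABLE
    if statement_type in ("UPDATE", "DELETE"):
        if not has_where:
            return "CRITICAL", "BLOCK"         # full-table write
        if estimated_rows > row_threshold:
            return "HIGH", "REQUIRE_APPROVAL"  # large destructive write
    return "LOW", "ALLOW"

assert classify("UPDATE", has_where=False) == ("CRITICAL", "BLOCK")
assert classify("DELETE", has_where=True, estimated_rows=3214) == ("HIGH", "REQUIRE_APPROVAL")
assert classify("DROP", has_where=False, is_ddl=True) == ("HIGH", "BLOCK")
```

Because the classifier is a pure function of the parsed statement and the dry-run estimate, the same inputs always produce the same decision, which is what makes the boundary deterministic.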
Why this matters
When execution is reproducible, teams can debug agent behavior, meet audit expectations, and recover safely — without “trusting the prompt”.
FAQ
Do I need to change my ORM or database engine?
No. Clients still emit SQL. The Safe Layer sits in front of the DB and enforces deterministic checks before execution.
How is this different from manual review?
Manual review doesn’t scale and isn’t reproducible. The Safe Layer makes decisions deterministic and logs evidence for replay.
Is it only for LLM agents?
No. Any SQL producer benefits: scripts, internal tools, automation, CI checks for generated SQL.
What does “replay” mean here?
You can reconstruct the decision process and evidence from logs without depending on the original agent output.
Design Partner Program
If your team is exploring AI → SQL automation and needs guaranteed safety before touching databases, we'd like to co-build.
About
Built originally for OMOP-based analytics and AI-assisted SQL in medical R&D, where every query needed explainability and recoverability. Later we realized the same deterministic boundary is needed far beyond healthcare.