Live Demos
Five scenarios. Copy-paste commands. See exactly what Makoto does — and what happens without it.
The Poisoned Pipeline
Your IoT pipeline ingests partner CSVs. Without a DBOM gate, SQL injection payloads and impossible readings flow straight to analytics.
- Corrupted CSV silently accepted (SQL injection, NaN, impossible sensor readings)
- DBOM-less file rejected instantly at the pipeline gate
- Hash-verified, signed file accepted with full provenance
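A minimal sketch of what such a pipeline gate could look like, assuming a DBOM is a JSON file sitting next to the data it describes. The `sha256` and `signer` field names are illustrative, not Makoto's actual schema:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hex SHA-256 digest of a file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def gate(csv_path: Path, dbom_path: Path) -> bool:
    """Admit a partner CSV only if a DBOM exists and its recorded
    hash matches the file's actual contents."""
    if not dbom_path.exists():
        print(f"REJECTED: no DBOM for {csv_path.name}")
        return False
    dbom = json.loads(dbom_path.read_text())
    if dbom.get("sha256") != sha256_of(csv_path):
        print(f"REJECTED: hash mismatch for {csv_path.name}")
        return False
    print(f"ACCEPTED: {csv_path.name} (signed by {dbom.get('signer')})")
    return True
```

Any file that arrives without a DBOM, or whose bytes no longer match the recorded digest, never reaches analytics.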
The Reproducibility Gap
A researcher cites experiment_v2.csv but can't explain which raw data version, outlier method, or processing steps were used. Paper rejected.
- A mystery CSV with four unanswerable reviewer questions
- Full 3-step lineage chain: instrument → outlier removal → normalization
- Hash chain verified step-by-step — reviewer satisfied
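The step-by-step check in the last bullet can be sketched like this: each lineage step records a digest of its input and its output, and the chain holds only when every step consumed exactly what the previous step produced. The step records below are illustrative, not Makoto's format:

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_chain(steps):
    """Walk adjacent lineage steps; the chain holds when each step's
    input digest equals the previous step's output digest."""
    for prev, cur in zip(steps, steps[1:]):
        if cur["input_sha256"] != prev["output_sha256"]:
            return False, cur["step"]
    return True, None

# The demo's three steps (digests here computed from stand-in bytes):
raw, cleaned, normed = b"raw", b"cleaned", b"normed"
chain = [
    {"step": "instrument",      "input_sha256": None,       "output_sha256": h(raw)},
    {"step": "outlier_removal", "input_sha256": h(raw),     "output_sha256": h(cleaned)},
    {"step": "normalization",   "input_sha256": h(cleaned), "output_sha256": h(normed)},
]
```

A reviewer asking "which raw data version?" gets an answer that is checkable, not just claimed: break any link and verification names the step where the story stops holding together.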
GitHub Action — Provenance on Every Release
Every data file change triggers automatic DBOM generation. No manual steps, no forgotten provenance. A real, usable GitHub Action.
- DBOM JSON generated for any file in one command
- SHA-256 hash, signer identity, and lineage auto-populated
- 5-line workflow file ready to drop into .github/workflows/
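What such a generation step computes can be sketched in a few lines of Python. A real workflow would invoke Makoto itself (the exact commands live in the setup guide); the record shape and field names below are assumptions for illustration:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def make_dbom(path: Path, signer: str, lineage: list) -> str:
    """Build a DBOM-style JSON record for one file: content hash,
    signer identity, lineage, and a generation timestamp."""
    record = {
        "file": path.name,
        "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),
        "signer": signer,
        "lineage": lineage,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)
```

Because the hash, signer, and timestamp are derived at generation time, nothing depends on a human remembering to fill them in.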
Configuration Incident Post-Mortem
Someone changes max_batch_size at 3am. Throughput drops 95%. Without an audit trail, the team burns hours investigating.
- Config changed at 3am — no trail, hours of forensics
- Same change with DBOM: signer, reason, old value — all instant
- Mean time to resolution: 2 hours → 2 minutes
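The difference between two hours of forensics and two minutes is having the answer already written down. A toy sketch of that record and lookup, with hypothetical field names (a real DBOM would carry a cryptographic signature, not just a name):

```python
from datetime import datetime, timezone

audit_log: list = []

def record_change(key, old, new, signer, reason):
    """Append one change record at the moment the change is made."""
    audit_log.append({
        "key": key, "old": old, "new": new,
        "signer": signer, "reason": reason,
        "at": datetime.now(timezone.utc).isoformat(),
    })

def explain(key):
    """Answer 'who changed this, from what, and why?' instantly."""
    e = next(rec for rec in reversed(audit_log) if rec["key"] == key)
    return f"{e['signer']} changed {key}: {e['old']} -> {e['new']} ({e['reason']})"
```

The 3am `max_batch_size` change stops being a mystery: signer, old value, and reason come back from one lookup.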
AI Dataset Verification
Your ML team downloads a training dataset from a shared drive. A single poisoned label can compromise model behavior — and you'd never know.
- Training dataset loaded on blind trust — label poisoning invisible
- SHA-256 verified, signer confirmed, 2-step lineage shown
- One changed label → hash mismatch → training blocked immediately
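Why a single changed label cannot hide: SHA-256 is sensitive to every byte, so flipping one character in one label produces a completely different digest than the one recorded in the DBOM. A minimal sketch of the training-time check, with a hypothetical `train_gate` helper:

```python
import hashlib

def train_gate(dataset: bytes, expected_sha256: str) -> str:
    """Refuse to start training unless the dataset bytes match the
    digest recorded in its DBOM."""
    actual = hashlib.sha256(dataset).hexdigest()
    if actual != expected_sha256:
        raise RuntimeError("dataset hash mismatch: training blocked")
    return "training may proceed"

clean = b"id,label\n1,cat\n2,dog\n"
expected = hashlib.sha256(clean).hexdigest()

# Poison a single label character: the digest no longer matches.
poisoned = clean.replace(b"dog", b"cog")
```

Without the recorded digest the poisoned file looks identical to the eye; with it, training stops before the bad label is ever seen.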