How do you know your AI provider hasn't silently changed the model — or the MCP server — you approved?
Behavioural fingerprinting for AI models and MCP servers. Detect silent model swaps, verify deployment integrity, and stream verification events to your SIEM — all from a single binary.
Organisations deploying AI face critical integrity challenges that can lead to compliance violations, security breaches, and operational failures:
Multi-dimensional behavioural fingerprinting with enterprise SIEM integration and MCP server verification
Creates unique DNA signatures by analysing the construction and characteristics of your model and MCP server across 60+ probe interactions.
Compares live model outputs across multiple verification dimensions with configurable weights and thresholds for precise match scoring.
Fingerprints Model Context Protocol servers via JSON-RPC 2.0, capturing multiple verification dimensions for comprehensive server integrity analysis.
Every fingerprint includes a SHA256 hash for tamper-proof verification. Immutable audit trails prove exactly which model version was deployed.
Eight domain-specific genome profiles — general, medical, finance, energy, legal, cybersecurity, education, and retail — plus support for custom profile files.
Stream verification events to Splunk, Azure Sentinel, Datadog, Elasticsearch, Grafana Loki, or any webhook endpoint with retry, backoff, and mTLS.
Fingerprint and verify models from any major provider or local deployment
Each genome profile contains 55 expert-crafted analysis sequences across domain subcategories, producing far more discriminating fingerprints than generic probing
Custom profile files can be loaded with the -promptfile flag.
Three-step workflow: fingerprint, verify, alert
Probe your approved AI model 60+ times to build a behavioural DNA baseline. DNAI analyses the construction and characteristics of your model and MCP server to create a unique cryptographic fingerprint.
Compare live model outputs against your stored baseline. Multi-dimensional matching detects silent model swaps, version changes, parameter modifications, and behavioural drift.
Stream structured verification events to your SIEM platform. Anomalies trigger immediate alerts with detailed match scores for your security and compliance teams.
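The verify step's weighted, multi-dimensional matching can be sketched as follows. The dimension names, weights, and threshold below are illustrative assumptions for the sake of the sketch, not DNAI's actual configuration:

```python
# Sketch of weighted multi-dimensional match scoring.
# Dimension names, weights, and the threshold are illustrative
# assumptions -- DNAI's real dimensions and defaults may differ.

DIMENSION_WEIGHTS = {
    "response_structure": 0.3,
    "vocabulary_profile": 0.3,
    "refusal_behaviour": 0.2,
    "latency_profile": 0.2,
}

MATCH_THRESHOLD = 0.85  # an overall score below this flags an anomaly


def match_score(per_dimension: dict) -> float:
    """Weighted average of per-dimension similarity scores (0.0 - 1.0)."""
    total = sum(DIMENSION_WEIGHTS.values())
    return sum(
        DIMENSION_WEIGHTS[d] * per_dimension.get(d, 0.0)
        for d in DIMENSION_WEIGHTS
    ) / total


def verdict(per_dimension: dict) -> str:
    """Classify a probe run against the stored baseline."""
    return "MATCH" if match_score(per_dimension) >= MATCH_THRESHOLD else "ANOMALY"
```

A silent model swap typically degrades several dimensions at once, which is what pushes the weighted score below the threshold even when any single dimension looks plausible on its own.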
Maintain AI integrity and compliance across your organisation
Verify medical AI assistants use only TGA-approved model versions with complete audit trails for patient safety and regulatory compliance.
Ensure AI-powered risk assessment and trading systems use approved models. Detect unauthorised changes that could impact financial decisions.
Maintain chain of custody for AI-generated legal documents, proving specific model versions were used to meet court and regulatory requirements.
Prevent shadow AI usage and ensure all departments use security-approved models with centralised governance and policy enforcement.
Track model versions used in experiments for reproducibility. Ensure research integrity with accurate documentation of AI methodologies.
Integrate fingerprint verification into deployment pipelines. Automatically gate releases when AI model integrity checks fail.
Stream structured verification events directly to your security operations platform with retry, backoff, and mTLS support
Built with security, scalability, and compliance at its core
Get started with DNAI in minutes. Fingerprint, verify, and monitor your AI models from the command line.
Probe your AI provider 60 times to build a behavioural DNA baseline. The fingerprint captures the construction and characteristics of your model across multiple verification dimensions.
# Set your API key
export OPENAI_API_KEY="sk-..."
# Fingerprint GPT-4o with 60 probes
dnai -provider openai \
-model gpt-4o \
-analysegenome 60 \
-savefp gpt4o-baseline.json \
-license-key YOUR_LICENSE_KEY
# Set your API key
export ANTHROPIC_API_KEY="sk-ant-..."
# Fingerprint Claude Sonnet
dnai -provider anthropic \
-model claude-sonnet-4-20250514 \
-analysegenome 60 \
-savefp sonnet-baseline.json \
-license-key YOUR_LICENSE_KEY
# Set your API key
export GOOGLE_API_KEY="AIza..."
# Fingerprint Gemini 1.5 Pro
dnai -provider gemini \
-model gemini-1.5-pro \
-analysegenome 60 \
-savefp gemini-baseline.json \
-license-key YOUR_LICENSE_KEY
# Set your Azure API key
export OPENAI_API_KEY="your-azure-key"
# Fingerprint Azure-hosted deployment
dnai -provider openai_compat \
-endpoint "https://myorg.openai.azure.com/openai/deployments/gpt4o/chat/completions?api-version=2024-02-15-preview" \
-model gpt-4o \
-azure \
-analysegenome 60 \
-savefp azure-gpt4o-baseline.json \
-license-key YOUR_LICENSE_KEY
# No API key needed for local models
dnai -provider openai_compat \
-endpoint "http://localhost:11434/v1/chat/completions" \
-model llama3 \
-analysegenome 60 \
-savefp llama3-baseline.json \
-license-key YOUR_LICENSE_KEY
Load a saved fingerprint and probe the live model to check if it still matches. DNAI reports match scores across all verification dimensions and flags anomalies.
# Verify GPT-4o against its saved fingerprint
dnai -provider openai \
-model gpt-4o \
-loadfp gpt4o-baseline.json \
-probe 30 \
-json \
-license-key YOUR_LICENSE_KEY
# Output includes match result, confidence score, and detailed analysis
Use the -json flag for structured output suitable for automation.
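A deployment gate over the structured output might look like the sketch below. The `match` and `score` field names are assumptions for illustration; inspect your actual -json output for the real schema:

```python
# Gate a release on DNAI's -json verification output.
# The "match" and "score" field names are illustrative assumptions;
# check your actual -json output for the real schema.
import json


def gate(report_json: str, min_score: float = 0.85) -> int:
    """Return 0 (pass) or 1 (fail), suitable as a process exit code."""
    report = json.loads(report_json)
    if report.get("match") and report.get("score", 0.0) >= min_score:
        return 0
    return 1
```

In a pipeline step, pipe the verification output into a script that calls `sys.exit(gate(sys.stdin.read()))` so a failed integrity check fails the build.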
Fingerprint and verify Model Context Protocol servers. DNAI probes the server via JSON-RPC 2.0, performing multi-dimensional integrity analysis to create a comprehensive server genome.
# Fingerprint an MCP server
dnai -mcpprobe "http://localhost:3000/mcp" \
-mcpsave mcp-baseline.json \
-license-key YOUR_LICENSE_KEY
# Later: verify the live server against the baseline
dnai -mcpprobe "http://localhost:3000/mcp" \
-mcpload mcp-baseline.json \
-mcpverify \
-mcpjson \
-license-key YOUR_LICENSE_KEY
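Conceptually, each MCP probe is a JSON-RPC 2.0 call whose canonicalised response can be hashed into the server genome. A minimal sketch of that idea, where the canonicalisation scheme is an assumption rather than DNAI's actual format:

```python
# Sketch: probe an MCP server with JSON-RPC 2.0 and hash the response.
# The canonicalisation scheme is an illustrative assumption.
import hashlib
import json


def jsonrpc_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request body, as used by MCP's HTTP transport."""
    req = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        req["params"] = params
    return req


def fingerprint_response(response: dict) -> str:
    """SHA256 of the canonicalised (sorted-key, compact) JSON response,
    so identical server state always yields an identical digest."""
    canonical = json.dumps(response, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

Probing, say, `tools/list` and hashing the result makes any change to the server's advertised tool surface visible as a digest mismatch against the stored baseline.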
Stream verification events to your SIEM platform. All SIEM modes support retry with exponential backoff, configurable timeouts, and mTLS.
dnai -provider openai -model gpt-4o \
-loadfp gpt4o-baseline.json -probe 30 \
-siem splunk \
-splunk.url "https://splunk.example.com:8088" \
-splunk.token "your-hec-token" \
-splunk.index "ai_integrity" \
-license-key YOUR_LICENSE_KEY
dnai -provider openai -model gpt-4o \
-loadfp gpt4o-baseline.json -probe 30 \
-siem sentinel \
-sentinel.workspace "your-workspace-id" \
-sentinel.key "your-shared-key-base64" \
-license-key YOUR_LICENSE_KEY
dnai -provider openai -model gpt-4o \
-loadfp gpt4o-baseline.json -probe 30 \
-siem datadog \
-datadog.api-key "your-dd-api-key" \
-datadog.site "datadoghq.com" \
-license-key YOUR_LICENSE_KEY
dnai -provider openai -model gpt-4o \
-loadfp gpt4o-baseline.json -probe 30 \
-siem elastic \
-elastic.url "https://localhost:9200" \
-elastic.index "ai-dna-verifications" \
-elastic.api-key "your-api-key" \
-license-key YOUR_LICENSE_KEY
dnai -provider openai -model gpt-4o \
-loadfp gpt4o-baseline.json -probe 30 \
-siem loki \
-loki.url "http://localhost:3100" \
-loki.labels "app=ai-dna,env=prod" \
-license-key YOUR_LICENSE_KEY
dnai -provider openai -model gpt-4o \
-loadfp gpt4o-baseline.json -probe 30 \
-siem webhook \
-webhook.url "https://hooks.example.com/ai-integrity" \
-webhook.headers "Authorization:Bearer tok123,X-Source:dnai" \
-license-key YOUR_LICENSE_KEY
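On the receiving side, any endpoint that accepts a JSON POST will work. A minimal receiver sketch, noting that the event payload schema is an assumption and the handler simply logs whatever arrives:

```python
# Minimal webhook receiver for DNAI verification events.
# The event payload schema is not specified here, so this
# handler just parses and logs the raw JSON body.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class EventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))
        print(f"verification event: {json.dumps(event)}")
        self.send_response(200)
        self.end_headers()


# To run standalone, point -webhook.url at http://<host>:8080/ and call:
#   HTTPServer(("0.0.0.0", 8080), EventHandler).serve_forever()
```

From here, the handler can forward anomalous events to a ticketing system or paging service of your choice.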
Run DNAI in any OCI-compatible container runtime. The examples below cover Docker and Kubernetes — create a fingerprint once with persistent storage, then monitor continuously from the same volume.
# DNAI — AI Genome Fingerprinting Container
FROM alpine:latest
# Download latest DNAI binary
RUN apk add --no-cache curl \
&& curl -fSL -o /usr/local/bin/dnai \
"https://www.cyberautomation.com.au/DNAI/dist/linux-amd64/DNAI" \
&& chmod +x /usr/local/bin/dnai
# Persistent volume for fingerprints
VOLUME /data
ENTRYPOINT ["dnai"]
version: "3.8"
services:
dnai-fingerprint:
build: .
volumes:
- dnai-data:/data
environment:
- OPENAI_API_KEY=sk-your-api-key # ← Replace
command:
- -provider
- openai # ← Your provider
- -model
- gpt-4o # ← Your model
- -analysegenome
- "60"
- -savefp
- /data/baseline.json
- -license-key
- YOUR_LICENSE_KEY # ← Replace
volumes:
dnai-data:
Run docker compose up dnai-fingerprint to create the genome fingerprint. Replace the provider, model, API key, and license key with your values. The fingerprint is saved to the dnai-data volume at /data/baseline.json.
version: "3.8"
services:
dnai-monitor:
build: . # Same Dockerfile as above
volumes:
- dnai-data:/data # Same volume — reads baseline.json
environment:
- OPENAI_API_KEY=sk-your-api-key # ← Replace
command:
- -provider
- openai
- -model
- gpt-4o
- -loadfp
- /data/baseline.json # Fingerprint from step above
- -probe
- "30"
- -json
- -license-key
- YOUR_LICENSE_KEY # ← Replace
# Optional: stream results to your SIEM
# - -siem
# - splunk
# - -splunk.url
# - "https://splunk.example.com:8088"
# - -splunk.token
# - "your-hec-token"
volumes:
dnai-data:
# Run verification every 6 hours via cron
# Add to crontab: crontab -e
0 */6 * * * docker compose run --rm dnai-monitor >> /var/log/dnai-monitor.log 2>&1
The monitor service mounts the same dnai-data volume as the fingerprinting container, so it reads the baseline created in the previous step. Schedule docker compose run --rm dnai-monitor via cron, a systemd timer, or your CI/CD pipeline for continuous integrity monitoring.
# Create with: kubectl create secret generic dnai-secrets \
# --from-literal=api-key=sk-your-api-key \
# --from-literal=license-key=YOUR_LICENSE_KEY
apiVersion: v1
kind: Secret
metadata:
name: dnai-secrets
type: Opaque
stringData:
api-key: sk-your-api-key # ← Replace
license-key: YOUR_LICENSE_KEY # ← Replace
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dnai-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
apiVersion: batch/v1
kind: Job
metadata:
name: dnai-fingerprint
spec:
backoffLimit: 2
template:
spec:
initContainers:
- name: download-dnai
image: alpine:latest
command: ["sh", "-c"]
args:
- |
apk add --no-cache curl &&
curl -fSL -o /data/dnai \
"https://www.cyberautomation.com.au/DNAI/dist/linux-amd64/DNAI" &&
chmod +x /data/dnai
volumeMounts:
- name: dnai-data
mountPath: /data
containers:
- name: dnai
image: alpine:latest
command: ["/data/dnai"]
args:
- "-provider"
- "openai" # ← Your provider
- "-model"
- "gpt-4o" # ← Your model
- "-analysegenome"
- "60"
- "-savefp"
- "/data/baseline.json"
- "-license-key"
- "$(LICENSE_KEY)"
env:
- name: OPENAI_API_KEY # ← Match your provider's env var
valueFrom:
secretKeyRef:
name: dnai-secrets
key: api-key
- name: LICENSE_KEY
valueFrom:
secretKeyRef:
name: dnai-secrets
key: license-key
volumeMounts:
- name: dnai-data
mountPath: /data
volumes:
- name: dnai-data
persistentVolumeClaim:
claimName: dnai-data
restartPolicy: Never
Apply with kubectl apply -f dnai-secret.yaml -f dnai-pvc.yaml -f dnai-fingerprint-job.yaml. The init container downloads the latest DNAI binary, then the main container runs the genome fingerprint analysis. The fingerprint is persisted to the dnai-data PVC at /data/baseline.json. Replace the provider, model, API key, and license key with your values.
apiVersion: batch/v1
kind: CronJob
metadata:
name: dnai-monitor
spec:
schedule: "0 */6 * * *" # Every 6 hours
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 5
failedJobsHistoryLimit: 3
jobTemplate:
spec:
backoffLimit: 2
template:
spec:
initContainers:
- name: download-dnai
image: alpine:latest
command: ["sh", "-c"]
args:
- |
apk add --no-cache curl &&
curl -fSL -o /data/dnai \
"https://www.cyberautomation.com.au/DNAI/dist/linux-amd64/DNAI" &&
chmod +x /data/dnai
volumeMounts:
- name: dnai-data
mountPath: /data
containers:
- name: dnai
image: alpine:latest
command: ["/data/dnai"]
args:
- "-provider"
- "openai"
- "-model"
- "gpt-4o"
- "-loadfp"
- "/data/baseline.json" # Fingerprint from Job above
- "-probe"
- "30"
- "-json"
- "-license-key"
- "$(LICENSE_KEY)"
# Optional: stream results to your SIEM
# - "-siem"
# - "splunk"
# - "-splunk.url"
# - "https://splunk.example.com:8088"
# - "-splunk.token"
# - "your-hec-token"
env:
- name: OPENAI_API_KEY
valueFrom:
secretKeyRef:
name: dnai-secrets
key: api-key
- name: LICENSE_KEY
valueFrom:
secretKeyRef:
name: dnai-secrets
key: license-key
volumeMounts:
- name: dnai-data
mountPath: /data
volumes:
- name: dnai-data
persistentVolumeClaim:
claimName: dnai-data
restartPolicy: Never
The CronJob mounts the same dnai-data PersistentVolumeClaim as the fingerprinting Job, so it reads the baseline automatically. The schedule runs every six hours; adjust the cron expression to suit your monitoring frequency. Apply with kubectl apply -f dnai-monitor-cronjob.yaml.
# Compare two fingerprints side-by-side
dnai -diff1 gpt4o-jan.json -diff2 gpt4o-mar.json \
-license-key YOUR_LICENSE_KEY
# Use finance-specific genome profile for fingerprinting
dnai -provider openai -model gpt-4o \
-analysegenome 60 \
-promptset finance \
-savefp gpt4o-finance.json \
-license-key YOUR_LICENSE_KEY
# Available sets: general, medical, finance, energy, legal, cybersecurity, education, retail
# Save progress during long probes (resumes on interrupt)
dnai -provider openai -model gpt-4o \
-analysegenome 100 \
-checkpoint gpt4o-progress.jsonl \
-savefp gpt4o-baseline.json \
-workers 5 \
-license-key YOUR_LICENSE_KEY
# Tune verification sensitivity with custom weights
dnai -provider openai -model gpt-4o \
-loadfp gpt4o-baseline.json -probe 30 \
-license-key YOUR_LICENSE_KEY
# Use dnai -h to see all available weight and threshold flags
DNAI ships with a comprehensive set of flags for fingerprinting, verification, MCP probing, SIEM integration, and more. Run dnai -h to see the full list of options and their defaults.
Run dnai -h for details.
Single binary, zero dependencies. Download for your platform, add your license key, and start fingerprinting.
Requirements: a valid license key. Run dnai -h for usage. All binaries are code-signed and notarised where applicable.
Contact our team to get a license key, request a demo, or discuss enterprise deployment options.