SentienGuard

Autonomous Architecture

Autonomous Infrastructure Wasn't Practical Until 2024

Immutable infrastructure, comprehensive telemetry, and AI pattern matching converged into a production-ready stack. This isn't theoretical—it's operating in production managing 500-node deployments with 87% autonomous resolution. Here's exactly how it works.

End-to-End: Detection → Decision → Execution → Verification

Five-layer architecture stack. Each layer feeds the next. Every action is verified and logged immutably.

Layer 5: Audit & Compliance

Immutable Logs (S3 Object Lock, SHA-256 chains)

Compliance Exports (SOC 2, HIPAA, PCI-DSS)

RBAC Approval Trails

Layer 4: Execution & Verification

Playbook Executor (YAML → Commands)

Health Verification (pre-check, post-check)

Rollback Engine (auto-rollback on failure)

Concurrency Control

Layer 3: AI Decision Engine

RAG Pipeline (incident → embedding → playbook match)

Confidence Scoring (0.0–1.0, threshold-based routing)

Context Enrichment (tags, history, environment)

Layer 2: Anomaly Detection

Dynamic Baselines (rolling 7-day average)

Statistical Anomaly Detection (σ-based thresholds)

Pattern Recognition (recurring incidents)

Layer 1: Observability & Telemetry

Metrics (Prometheus, CloudWatch, OpenTelemetry)

Logs (structured JSON, syslog, application logs)

Traces (distributed tracing, request flows)

Events (Kubernetes, cloud provider events)

Your Infrastructure (AWS/GCP/Azure, Kubernetes, VMs, Databases)

Layer 1: What Gets Collected (And Why It Matters)

Comprehensive telemetry is the foundation. Without sufficient context, AI can't tell a routine spike from a real failure, so it just pages humans.

Agent Deployment (Per Host)

50 MB binary per host. Collects metrics, watches logs, listens for events. Ships via TLS 1.3 with cert pinning.

Metric Collector (30s interval)

System metrics (CPU, memory, disk, network)

Process metrics (top processes, zombies)

Database metrics (via pg_stat_*)

Custom metrics (StatsD endpoint)

Log Watcher

/var/log/syslog

/var/log/postgresql/*.log

/var/log/nginx/error.log

Application logs (journalctl)

Event Listener

Systemd events (service start/stop)

Kernel events (OOM killer, disk errors)

Network events (interface down, packet loss)

Metrics Collected (Every 30 Seconds)

json
{
  "host": "prod-db-01.internal",
  "environment": "production",
  "tags": {
    "service": "postgresql",
    "role": "primary",
    "compliance": "hipaa"
  },
  "metrics": {
    "system": {
      "cpu_percent": 42.3,
      "memory_percent": 68.1,
      "disk_usage_percent": {
        "/var/lib/postgresql": 94.8
      },
      "load_average_1m": 2.34
    },
    "database": {
      "connections_active": 87,
      "connections_idle": 89,
      "connections_max": 180,
      "connections_utilization": 0.98,
      "cache_hit_ratio": 0.97
    }
  }
}

Insufficient context (traditional monitoring)

Metric: disk_usage_percent = 95%

Alert: "Disk 95% on prod-db-01"

Problem: Which disk? Why full? How fast is it growing?

Sufficient context (comprehensive telemetry)

Metric: disk["/var/lib/postgresql"] = 94.8%

Growth: +3.2%/hr • Top consumer: WAL files 8.3 GB

Pattern: 3 similar incidents in 30 days

Decision: WAL archival failure. Playbook: postgres_wal_archival_recovery (confidence: 0.96)

With context, AI makes intelligent decisions. Without context, AI just pages humans (can't differentiate).
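The enriched payload shown above can be rendered as a natural-language incident summary before it reaches the decision layer. A minimal sketch, assuming the field names from the example payload (describe_incident is an illustrative helper, not the product's API):

```python
# Illustrative: field names follow the example payload shown earlier.
payload = {
    "host": "prod-db-01.internal",
    "environment": "production",
    "tags": {"service": "postgresql", "role": "primary"},
    "metrics": {"system": {"disk_usage_percent": {"/var/lib/postgresql": 94.8}}},
}

def describe_incident(p):
    """Render enriched telemetry as a natural-language summary for embedding."""
    tags = p["tags"]
    disk = p["metrics"]["system"]["disk_usage_percent"]
    # Pick the fullest mount point as the headline fact
    mount, pct = max(disk.items(), key=lambda kv: kv[1])
    return (f"{tags['service']} {tags['role']} host {p['host']} "
            f"({p['environment']}): disk {mount} at {pct}%")

print(describe_incident(payload))
```

A summary like this, rather than a bare metric name, is what gives the downstream embedding something meaningful to match against.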

Layer 2: Dynamic Baselines Beat Static Thresholds

Static thresholds cause false positives and late detection. Dynamic baselines learn your infrastructure's normal patterns.

The Static Threshold Problem

⚠️

False Positives

Disk fills to 92% during nightly backup. Alert fires every night at 2 AM. Engineer learns to ignore.

Late Detection

Disk at 85% but growing 10%/hr. Alert at 90% gives only 30 min warning. Too late to prevent outage.

No Context

90% disk on database = CRITICAL. 90% disk on log server = NORMAL. Same threshold, different severity.

Dynamic Baseline Calculation

python
import numpy as np
from datetime import timedelta

def calculate_baseline(metric_name, host, window_days=7):
    """
    Calculate dynamic baseline from historical data.
    7 days x 24 hours x 120 samples/hour = 20,160 samples
    """
    # fetch_metrics() and now() are provided by the platform's metrics store
    historical_data = fetch_metrics(
        metric=metric_name,
        host=host,
        start=now() - timedelta(days=window_days),
        end=now()
    )

    # Remove outliers (3σ rule)
    mean = np.mean(historical_data)
    stddev = np.std(historical_data)
    filtered = [
        x for x in historical_data
        if abs(x - mean) <= 3 * stddev
    ]

    return {
        "mean": np.mean(filtered),
        "stddev": np.std(filtered),
        "p50": np.percentile(filtered, 50),
        "p95": np.percentile(filtered, 95),
        "p99": np.percentile(filtered, 99),
    }
python
def detect_anomaly(current_value, baseline, sensitivity="medium"):
    """
    Detect if current value is anomalous.
    Uses σ-based thresholds (standard deviations from mean).
    """
    # Guard against a zero-variance baseline (constant metric)
    sigma = abs(current_value - baseline["mean"]) / max(baseline["stddev"], 1e-9)

    thresholds = {
        "low": 5.0,      # 99.9999% confidence
        "medium": 3.0,   # 99.7% confidence
        "high": 2.0      # 95% confidence
    }

    is_anomaly = sigma >= thresholds[sensitivity]

    if sigma >= 5.0:   severity = "critical"
    elif sigma >= 3.0: severity = "warning"
    elif sigma >= 2.0: severity = "info"
    else:              severity = None

    return {
        "is_anomaly": is_anomaly,
        "severity": severity,
        "sigma": sigma,
        "confidence": min(sigma / 5.0, 1.0),
    }

# Example: Disk 94.8% on prod-db-01
# Sigma: (94.8 - 72.3) / 4.8 = 4.69σ
# Result: anomaly=True, severity=warning, confidence=0.94

Static vs Dynamic: Real Scenarios

Scenario 1: Nightly Backup (Expected Spike)

Static threshold (90%)

Disk: 71% → 92% (backup) → 72%. Alert FIRES at 92%. False positive every night.

Dynamic baseline

Time-aware baseline knows 92% is normal at 2 AM (within p99). No alert. Correctly learned pattern.

Scenario 2: Rapid Growth (Actual Problem)

Static threshold (90%)

Alert at 90% when disk growing 10%/hr. Only 30 min warning before 100%.

Dynamic baseline

Alert at 85% (2.6σ + rapid trend). 1.5-hour warning instead of 30 min.

Result: Fewer false positives, earlier true positive detection.
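The earlier warning from trend awareness is simple arithmetic: project the time until the disk is full from the growth rate, and alert on that projection rather than on the raw level. A minimal sketch (hours_until_full is an illustrative helper, not a product API):

```python
def hours_until_full(current_pct, growth_pct_per_hr):
    """Project hours until a disk reaches 100%, assuming linear growth."""
    if growth_pct_per_hr <= 0:
        return float("inf")  # stable or shrinking: no projected exhaustion
    return (100.0 - current_pct) / growth_pct_per_hr
```

At 85% with 10%/hr growth, this projects 1.5 hours of runway, which is why a trend-aware alert at 85% buys more time than waiting for a fixed 90% threshold.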

Layer 3: How Semantic Playbook Matching Actually Works

RAG (Retrieval-Augmented Generation) uses vector similarity to find the right playbook for any incident—including ones never seen before.

1. Context Enrichment

Convert incident metrics + logs + anomaly data into natural language description for embedding

2. Embedding Generation

text-embedding-3-large produces 3,072-dimensional vector (80ms, $0.00013)

3. Vector Search

Cosine similarity search across 487 playbooks in Pinecone, return top 5 matches

4. Confidence Scoring & Decision

Base similarity + historical success rate + recurring pattern bonus → final confidence → routing decision
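The vector-search step reduces to cosine similarity between the incident embedding and each playbook embedding. In production this runs inside a vector database like Pinecone; the sketch below shows the underlying math in plain Python (names and toy vectors are illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(incident_vec, playbooks, k=5):
    """Rank playbook embeddings by similarity to the incident embedding."""
    scored = ((name, cosine_similarity(incident_vec, vec))
              for name, vec in playbooks.items())
    return sorted(scored, key=lambda nv: nv[1], reverse=True)[:k]
```

The same ranking over 3,072-dimensional embeddings and 487 playbooks is what produces the similarity scores in the results table below.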

Vector Search Results (Ranked by Similarity)

1. postgres_connection_pool_reset (Similarity: 0.94 · Success: 94% · Avg: 28s)
2. postgres_connection_pool_scale (Similarity: 0.87 · Success: 89% · Avg: 45s)
3. mysql_connection_pool_reset (Similarity: 0.76 · Success: 92% · Avg: 32s)
4. postgres_restart_service (Similarity: 0.68 · Success: 85% · Avg: 60s)
5. redis_connection_limit (Similarity: 0.43 · Success: 88% · Avg: 15s)

Confidence Scoring & Decision Routing

≥ 0.95 → Execute immediately
≥ 0.90 → Execute with verification
≥ 0.80 → Require human approval
≥ 0.70 → Dry-run mode only
< 0.70 → Escalate to human
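The routing table above maps directly onto a threshold cascade. A minimal sketch (route is an illustrative function name, not the product's API):

```python
def route(confidence):
    """Map a final confidence score (0.0-1.0) to an execution mode,
    using the thresholds from the routing table."""
    if confidence >= 0.95:
        return "execute_immediately"
    if confidence >= 0.90:
        return "execute_with_verification"
    if confidence >= 0.80:
        return "require_human_approval"
    if confidence >= 0.70:
        return "dry_run_only"
    return "escalate_to_human"
```

Because the checks run from the highest threshold down, each score lands in exactly one band.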

Why RAG Works (vs Traditional Approaches)

| Approach | How It Works | Pros | Cons |
| --- | --- | --- | --- |
| If/Then Rules | if metric == X and value > Y: action() | Simple, deterministic | Brittle, specification explosion, no generalization |
| Classification ML | Train model to classify incidents → actions | Learns patterns | Needs 10K+ training examples, can't handle novel incidents |
| RAG (Semantic Search) | Embed incident + playbooks, find nearest match | Zero-shot learning, generalizes | Requires good playbook descriptions |

Zero-shot learning

No training data needed. New playbook = just add description (no retraining).

Generalization

"CockroachDB connection pool" finds "Postgres connection pool" (similar patterns).

Explainability

Similarity score shows why playbook was selected. Inspect description to understand match.

Continuous improvement

Better descriptions → better matches. No retraining needed.

Layer 4: Safety Through Idempotency and Health Checks

Every playbook has pre-checks, verification steps, rollback procedures, and concurrency control. Safety is structural, not optional.

Complete Playbook: postgres_connection_pool_reset.yaml

Production playbook with safety checks, execution steps, verification, rollback, and audit logging.

yaml
pre_execution_checks:
  - name: verify_anomaly_severity
    type: condition
    condition: anomaly_sigma >= 3.0
    fail_action: abort
    reason: "Only run for significant anomalies (>=3σ)"

  - name: verify_database_healthy
    type: sql_query
    query: "SELECT 1 as health_check;"
    expect_rows: 1
    timeout_seconds: 5
    fail_action: abort
    reason: "Don't touch database if already unhealthy"

  - name: verify_not_in_maintenance
    type: tag_check
    tag: maintenance_mode
    expect_value: false
    fail_action: abort

  - name: verify_no_concurrent_playbooks
    type: concurrency_check
    scope: host
    fail_action: wait
    max_wait_seconds: 60
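In Python, the gating logic for these pre-checks might look like the sketch below. It is illustrative only, assuming the YAML fields shown above; sql_query and concurrency checks are skipped because they need live probes.

```python
# Comparison operators supported in 'condition' checks
OPS = {">=": lambda a, b: a >= b,
       "<=": lambda a, b: a <= b,
       "==": lambda a, b: a == b}

def run_pre_checks(checks, ctx):
    """Evaluate pre_execution_checks in order against the incident context.
    Returns (ok, failed_check_name). A failing check whose fail_action is
    'abort' stops execution immediately."""
    for check in checks:
        kind = check["type"]
        if kind == "condition":
            # e.g. "anomaly_sigma >= 3.0"
            key, op, val = check["condition"].split()
            ok = OPS[op](ctx[key], float(val))
        elif kind == "tag_check":
            ok = ctx["tags"].get(check["tag"], False) == check["expect_value"]
        else:
            continue  # sql_query / concurrency checks require live probes
        if not ok and check.get("fail_action") == "abort":
            return False, check["name"]
    return True, None
```

The key property is ordering: the cheapest checks run first, and the first abort-class failure stops the playbook before it touches the host.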

Idempotency: Why It Enables Autonomous Execution

Non-Idempotent (Dangerous)

Action: "Terminate connection with PID 12345"

Run 1: Success (PID 12345 terminated)

Run 2: Failure (PID 12345 no longer exists, throws error)

Not safe to retry

Idempotent (Safe)

Action: "Terminate all idle connections >1 hour"

Run 1: Success (89 connections terminated)

Run 2: Success (0 found matching criteria, no-op)

Safe to retry, no side effects

ALL autonomous playbooks must be idempotent.

Network interruptions, partial failures, and verification retries are all safe when operations are idempotent. Non-idempotent playbooks require manual approval.
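The difference between the two actions above is that the idempotent one is defined by a predicate ("idle longer than an hour"), not by a specific target. A toy simulation of that property (in production the predicate runs as pg_terminate_backend over pg_stat_activity; this pure-Python model is illustrative):

```python
def terminate_idle(connections, max_idle_seconds=3600):
    """Predicate-based termination: end every connection idle > 1 hour.
    Once nothing matches the predicate, re-running is a no-op, which is
    exactly what makes the action safe to retry."""
    to_kill = [c for c in connections
               if c["state"] == "idle" and c["idle_seconds"] > max_idle_seconds]
    for conn in to_kill:
        conn["state"] = "terminated"
    return len(to_kill)

pool = [{"state": "idle", "idle_seconds": 7200},
        {"state": "active", "idle_seconds": 0}]
terminate_idle(pool)  # first run terminates the idle connection
terminate_idle(pool)  # second run finds nothing matching: no-op
```

A PID-based version of the same action would raise on the second run, because the target no longer exists.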

Health Verification (Pre & Post)

Pre-Execution: Don't Make Things Worse

Can we connect to database? (SELECT 1 → if fails: escalate)

Are there write errors already? (if found: escalate)

Is replication working? (if broken: don’t risk primary)

Is this a maintenance window? (if true: abort)

Only proceed if database is healthy (just connection pool issue)

Post-Execution: Confirm Fix Worked

Pool healthy? (connections < 80 → if fails: rollback)

New connection works? (within 5s → if fails: rollback)

Write operations work? (INSERT test → if fails: rollback)

Application healthy? (HTTP 200 → if fails: rollback)

No new log errors? (0 errors in 60s → if fails: warn)

Only mark success if ALL verifications pass
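The verification gate above reduces to a simple decision rule: rollback-class failures abort and revert, warn-class failures are recorded but don't block success. A sketch under those assumptions, with probe execution stubbed as pre-computed booleans:

```python
def verify_and_decide(checks):
    """Walk post-execution checks in order. Each check is
    (name, passed, on_fail) where on_fail is 'rollback' or 'warn'.
    The first failed rollback-class check triggers rollback;
    warn-class failures are collected but don't block success."""
    warnings = []
    for name, passed, on_fail in checks:
        if passed:
            continue
        if on_fail == "rollback":
            return {"status": "rollback", "failed": name, "warnings": warnings}
        warnings.append(name)
    return {"status": "success", "warnings": warnings}
```

Success is only declared after every rollback-class check has passed, which is what makes autonomous execution safe to leave unattended.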

Layer 5: Every Action Logged Immutably Forever

Complete incident records with execution logs, verification results, and tamper-proof hash chains. SOC 2, HIPAA, and PCI-DSS compliant.

Audit Log Structure

json
{
  "incident_id": "inc_20260212_112600_a8f3b2c1",
  "timestamp_start": "2026-02-12T11:26:00.000Z",
  "timestamp_end": "2026-02-12T11:26:28.000Z",
  "duration_seconds": 28,

  "incident_context": {
    "host": "prod-db-01.internal",
    "environment": "production",
    "tags": { "service": "postgresql", "compliance": "hipaa" },
    "anomaly": {
      "metric": "postgres.connection_pool.utilization",
      "value": 0.98,
      "baseline_mean": 0.42,
      "sigma": 4.69,
      "severity": "warning"
    }
  },

  "decision": {
    "playbook_selected": "postgres_connection_pool_reset",
    "playbook_version": "1.4.2",
    "confidence": 0.96,
    "decision_rationale": "High confidence match (0.94 similarity + 0.02 recurring pattern bonus)",
    "execution_mode": "autonomous",
    "approval_required": false
  },

  "execution_log": [
    { "step": "diagnose_connection_pool", "result": { "idle": 89, "active": 87, "total": 176 }, "status": "success" },
    { "step": "terminate_idle_connections", "result": { "terminated": 89 }, "status": "success" },
    { "step": "verify_pool_reset", "result": { "active": 9, "total": 9 }, "status": "success" }
  ],

  "verification_log": [
    { "step": "test_new_connection", "result": "success", "duration_ms": 247 },
    { "step": "test_application_health", "result": "success", "http_status": 200 },
    { "step": "test_write_operation", "result": "success", "rows_inserted": 1 }
  ],

  "outcome": {
    "status": "success",
    "resolution_time_seconds": 28,
    "verification_passed": true,
    "rollback_triggered": false,
    "human_intervention_required": false
  },

  "audit_metadata": {
    "s3_object_lock": { "mode": "COMPLIANCE", "retain_until": "2032-02-12" },
    "hash_chain": {
      "current_hash": "sha256:d4f6a9b2c8e1f3a5b7d9...",
      "previous_hash": "sha256:b2c4d6e8f0a2b4c6d8e0..."
    },
    "signature": { "algorithm": "Ed25519" },
    "compliance_tags": ["soc2_cc6.1", "hipaa_164.312.b", "pci_dss_requirement_10"]
  }
}

Immutability Guarantee

S3 Object Lock (WORM)

Write Once Read Many. Cannot be modified, deleted, or overwritten—even by AWS root account in COMPLIANCE mode.

PASS: Cannot modify existing log
PASS: Cannot delete existing log
PASS: Cannot overwrite (versioning)
PASS: Can detect tampering (hash chain)

Retention enforcement

6 years (HIPAA requirement). Move to Glacier Deep Archive after 2 years (96% cheaper).

Hash Chain (Tamper Detection)

Each log entry includes the hash of the previous entry. Any modification breaks the chain.

genesis → Log 1 → Log 2 → Log 3 → …

Tamper detection:

If an attacker modifies Log 2, the chain breaks: Log 3.previous_hash ≠ Log 2.current_hash. The attacker can't repair the chain because S3 Object Lock prevents modification.

bash
python verify_logs.py audit_logs_2026.json

Verifying 23,445 log entries...
[PASS] All 23,445 logs verified
[PASS] Chain integrity: PASS
[PASS] No tampering detected
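A chain verifier like the one invoked above can be sketched in a few lines: each entry's hash covers its own content plus the previous hash, so editing any entry invalidates every hash after it. (chain_hash and verify_chain are illustrative, not the shipped tool.)

```python
import hashlib
import json

def chain_hash(entry, previous_hash):
    """Hash this entry's canonical JSON together with the previous hash."""
    payload = json.dumps(entry, sort_keys=True).encode()
    return "sha256:" + hashlib.sha256(previous_hash.encode() + payload).hexdigest()

def verify_chain(logs):
    """Recompute every hash from 'genesis' forward; any edit breaks the chain."""
    prev = "genesis"
    for log in logs:
        if log["current_hash"] != chain_hash(log["entry"], prev):
            return False
        prev = log["current_hash"]
    return True
```

Verification needs only the log file itself: no keys or external state, just a full re-walk of the chain.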

Putting It All Together

Complete incident flow through all 5 layers. Database connection pool incident: 11:26:00 → 11:26:28. 28 seconds, fully autonomous.

11:26:00.000 · Layer 1

Agent collects metrics (30s interval)

postgres.connection_pool.utilization = 98%, connections.idle = 89, connections.active = 87

11:26:00.250 · Layer 1

Agent parses logs

"ERROR: remaining connection slots reserved" + "FATAL: too many clients already"

11:26:00.500 · Layer 1

Payload sent to control plane

TLS 1.3, cert-pinned, 4.3 KB payload, 15ms latency

11:26:00.515 · Layer 2

Calculate baseline (7-day rolling)

Baseline mean: 42.3%, stddev: 11.9% (20,160 samples)

11:26:00.520 · Layer 2

Detect anomaly

Sigma: (98 − 42.3) / 11.9 ≈ 4.69σ, severity: warning, confidence: 0.94

11:26:00.525 · Layer 2

Check trend

Rate: −0.2%/hr (stable). Pool full, not filling. Idle connection accumulation.

11:26:00.530 · Layer 3

Enrich context

Combine: metrics + logs + anomaly + trend + tags → natural language description

11:26:00.615 · Layer 3

Generate embedding

text-embedding-3-large, 3,072 dims, 85ms latency, $0.00013 cost

11:26:00.700 · Layer 3

Vector search (Pinecone)

Top match: postgres_connection_pool_reset (similarity: 0.94), k=5 results

11:26:00.790 · Layer 3

Confidence scoring → Decision

Final confidence: 0.96 (≥ 0.95 → execute immediately)

11:26:00.800 · Layer 4

Pre-execution checks

DB health: PASS, maintenance mode: false, concurrent playbooks: none

11:26:01.050 · Layer 4

Diagnose connection pool

SQL query: 89 idle, 87 active, 176 total connections

11:26:02.255 · Layer 4

Terminate idle connections

pg_terminate_backend() on 89 idle connections >1hr, duration: 2.8s

11:26:07.055 · Layer 4

Verify pool reset

9 active, 9 total (9 < 80 threshold → PASS)

11:26:08.597 · Layer 4

Post-verification: connection + app + write tests

New connection: 247ms, HTTP health: 200 OK (156ms), write test: 1 row inserted

11:26:09.400 · Layer 5

Generate audit log

Complete incident record (JSON, 4,237 lines). SHA-256 hash computed.

11:26:09.500 · Layer 5

Upload to S3 (Object Lock)

COMPLIANCE mode, retain until 2032-02-12, hash chain updated

11:26:10.000 · Layer 5

Slack notification sent

Auto-resolved: postgres_connection_pool_reset on prod-db-01 (28s)

Incident Resolved: 28 Seconds

Revenue lost: $0

Engineer time: 0 min

Customer churn: 0

Total cost: $351

Engineer (Marcus): Still in meeting. Reviews Slack notification next morning over coffee (2 minutes). Customer: Never noticed.

The Architecture Is Production-Ready.

The five-layer architecture—observability, anomaly detection, AI decision engine, execution & verification, and audit logging—is operating in production today. This isn't a research project. It's managing real infrastructure, resolving real incidents, and generating real compliance evidence.

Month 1

Deploy agents, validate architecture in your environment

Month 2

Prove 87% autonomous rate, measure MTTR improvement

Month 3

Generate compliance evidence, export audit logs for assessors

Free tier: 3 nodes forever, complete architecture access, full audit logging, no credit card required.