API Reference

Complete reference for SentinAI REST API and MCP endpoints.


Base URL

Local Development:

http://localhost:3002

Production:

https://your-sentinai-instance.com

Authentication

API Key (Optional)

When SENTINAI_API_KEY is configured, all write operations require authentication:

curl -X POST https://sentinai.example.com/api/scaler \
  -H "x-api-key: your-api-key-here" \
  -H "Content-Type: application/json" \
  -d '{"action": "scale", "targetVCpu": 4}'

Protected Operations:

  • POST, PUT, PATCH, DELETE requests
  • Exempt: /api/health, /api/agent-loop, /api/metrics/seed
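The rule above can be sketched as a small client helper. This is a minimal illustration, not part of SentinAI itself; the helper name is hypothetical:

```typescript
// Methods that require the x-api-key header when SENTINAI_API_KEY is set.
const WRITE_METHODS = new Set(["POST", "PUT", "PATCH", "DELETE"]);
// Paths exempt from authentication even for writes.
const EXEMPT_PATHS = new Set(["/api/health", "/api/agent-loop", "/api/metrics/seed"]);

// Build request headers, attaching the API key only when the
// operation is a protected write to a non-exempt path.
function buildHeaders(method: string, path: string, apiKey?: string): Record<string, string> {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  const isWrite = WRITE_METHODS.has(method.toUpperCase());
  if (apiKey && isWrite && !EXEMPT_PATHS.has(path)) {
    headers["x-api-key"] = apiKey;
  }
  return headers;
}
```

With this, `buildHeaders("POST", "/api/scaler", key)` attaches the key, while GET requests and exempt paths do not.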

Core Endpoints

GET /api/health

System health check.

Response:

{
  "status": "ok",
  "timestamp": "2026-02-23T07:00:00.000Z",
  "l2Connected": true,
  "k8sConnected": true
}

Status Codes:

  • 200: All systems operational
  • 503: L2 or K8s connection failed

GET /api/metrics

Current system metrics and anomaly status.

Query Parameters:

  • includeHistory (optional): Include time-series data (default: false)

Response:

{
  "metrics": {
    "blockHeight": 12345678,
    "cpuUsage": 45.3,
    "txPoolCount": 23,
    "gasUsedRatio": 0.78,
    "blockInterval": 2000
  },
  "anomalies": [
    {
      "metric": "cpuUsage",
      "value": 87.3,
      "zScore": 3.2,
      "direction": "up",
      "severity": "medium",
      "description": "CPU spike detected"
    }
  ],
  "components": [
    { "name": "op-geth", "status": "healthy", "cpu": 45.3 },
    { "name": "op-node", "status": "healthy", "cpu": 12.1 }
  ],
  "cost": {
    "opGethMonthlyCost": 73.44,
    "currentVCpu": 2
  }
}
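A sketch of consuming the anomalies array on the client side, assuming only the response shape shown above (the type and function names are illustrative):

```typescript
type Severity = "low" | "medium" | "high" | "critical";

interface Anomaly {
  metric: string;
  value: number;
  zScore: number;
  direction: "up" | "down";
  severity: Severity;
  description: string;
}

const SEVERITY_RANK: Record<Severity, number> = { low: 0, medium: 1, high: 2, critical: 3 };

// Return anomalies at or above a minimum severity, most severe first.
function filterAnomalies(anomalies: Anomaly[], min: Severity): Anomaly[] {
  return anomalies
    .filter((a) => SEVERITY_RANK[a.severity] >= SEVERITY_RANK[min])
    .sort((a, b) => SEVERITY_RANK[b.severity] - SEVERITY_RANK[a.severity]);
}
```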

POST /api/metrics/seed

Inject test scenario data (development/demo only).

Body:

{
  "scenario": "spike",  // "stable" | "spike" | "rising" | "falling"
  "dataPoints": 20
}

Response:

{
  "status": "success",
  "injected": 20,
  "scenario": "spike"
}

GET /api/anomalies

Fetch recent anomaly events.

Query Parameters:

  • limit (optional): Max events to return (default: 10)
  • severity (optional): Filter by severity (low/medium/high/critical)

Response:

{
  "anomalies": [
    {
      "id": "evt_abc123",
      "timestamp": "2026-02-23T07:00:00.000Z",
      "metric": "cpuUsage",
      "value": 87.3,
      "zScore": 3.2,
      "severity": "medium",
      "resolved": false
    }
  ],
  "count": 1
}

POST /api/rca

Request root cause analysis for an anomaly (defaults to the latest).

Body:

{
  "anomalyEventId": "evt_abc123"  // optional, uses latest if omitted
}

Response:

{
  "eventId": "evt_abc123",
  "rootCause": "Derivation lag: op-node falling behind L1",
  "affectedComponents": ["op-node", "op-batcher"],
  "riskLevel": "high",
  "actionPlan": "Increase op-node CPU allocation; verify L1 RPC health",
  "confidence": 85,
  "analyzedAt": "2026-02-23T07:01:23.000Z"
}
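A client might gate automated follow-up on the confidence and riskLevel fields. The thresholds below are illustrative assumptions, not part of the API contract:

```typescript
interface RcaResult {
  eventId: string;
  rootCause: string;
  riskLevel: "low" | "medium" | "high" | "critical";
  confidence: number; // 0-100, as in the response above
}

// Act automatically only on sufficiently confident, non-critical findings;
// critical findings are left for human approval (a design choice, not API policy).
function shouldAutoRemediate(rca: RcaResult, minConfidence = 80): boolean {
  return rca.confidence >= minConfidence && rca.riskLevel !== "critical";
}
```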

POST /api/scaler

Execute scaling action (manual or policy-driven).

Body:

{
  "action": "scale",
  "targetVCpu": 4,
  "reason": "Manual scaling for load test"  // optional
}

Response (Simulation Mode):

{
  "status": "simulated",
  "decision": {
    "action": "scale",
    "targetVCpu": 4,
    "currentVCpu": 2,
    "reason": "Manual scaling for load test"
  },
  "message": "Scaling action logged (simulation mode active)"
}

Response (Live Mode):

{
  "status": "success",
  "decision": {
    "action": "scale",
    "targetVCpu": 4,
    "currentVCpu": 2,
    "executedAt": "2026-02-23T07:05:00.000Z"
  },
  "verificationStatus": "healthy",
  "cooldownUntil": "2026-02-23T07:10:00.000Z"
}

Status Codes:

  • 200: Action executed successfully
  • 400: Invalid parameters
  • 403: Read-only mode enabled or in cooldown period
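When a 403 indicates an active cooldown, a client can compute how long to wait from the cooldownUntil timestamp in the live-mode response. A minimal sketch (the helper name is illustrative):

```typescript
// Milliseconds remaining before a scaling call may be retried,
// given the cooldownUntil timestamp from a live-mode response.
function cooldownRemainingMs(cooldownUntil: string, now: Date = new Date()): number {
  return Math.max(0, Date.parse(cooldownUntil) - now.getTime());
}
```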

GET /api/agent-decisions

Audit trail of recent scaling decisions.

Query Parameters:

  • limit (optional): Max decisions to return (default: 20)

Response:

{
  "decisions": [
    {
      "id": "dec_xyz789",
      "timestamp": "2026-02-23T07:05:00.000Z",
      "action": "scale",
      "targetVCpu": 4,
      "previousVCpu": 2,
      "reason": "Anomaly-driven scaling (cpuUsage spike)",
      "outcome": "success",
      "verificationStatus": "healthy"
    }
  ],
  "count": 1
}

POST /api/nlops

Natural language operations chat interface.

Body:

{
  "message": "What's the current CPU usage?",
  "conversationId": "conv_abc123"  // optional, for multi-turn context
}

Response:

{
  "reply": "Current CPU usage is 45.3% (2 vCPU allocated).",
  "conversationId": "conv_abc123",
  "actions": [],  // any tool calls executed
  "timestamp": "2026-02-23T07:10:00.000Z"
}

GET /api/cost-report

Cost analysis and optimization recommendations.

Response:

{
  "current": {
    "vCpu": 2,
    "monthlyCost": 73.44,
    "currency": "USD"
  },
  "optimizations": [
    {
      "recommendation": "Reduce to 1 vCPU during low-traffic periods",
      "potentialSavings": 36.72,
      "confidence": 85
    }
  ]
}
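The savings figure can be reproduced from the current allocation, assuming cost scales linearly with vCPU (which is consistent with the sample payload: $73.44 at 2 vCPU implies $36.72 saved by dropping to 1 vCPU). A hypothetical helper:

```typescript
// Estimate monthly savings from reducing the vCPU allocation,
// assuming a linear cost-per-vCPU model.
function estimatedSavings(monthlyCost: number, currentVCpu: number, targetVCpu: number): number {
  const perVCpu = monthlyCost / currentVCpu;
  return perVCpu * (currentVCpu - targetVCpu);
}
```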

GET /api/goals

Goal manager status (autonomous agent goals).

Response:

{
  "activeGoals": [
    {
      "id": "goal_123",
      "description": "Maintain CPU < 80% during peak hours",
      "status": "active",
      "progress": 75,
      "createdAt": "2026-02-23T00:00:00.000Z"
    }
  ],
  "completedCount": 12,
  "failedCount": 2
}

POST /api/goals

Create a new autonomous goal.

Body:

{
  "description": "Reduce transaction pool backlog to < 10",
  "priority": "high",
  "deadline": "2026-02-24T00:00:00.000Z"  // optional
}

Response:

{
  "goalId": "goal_124",
  "status": "created",
  "estimatedCompletion": "2026-02-23T12:00:00.000Z"
}

POST /api/remediation

Trigger auto-remediation for known issue patterns.

Body:

{
  "issueType": "sync-stall",  // "sync-stall" | "high-cpu" | "txpool-backlog"
  "autoApprove": false  // require approval for high-risk actions
}

Response:

{
  "remediationId": "rem_abc123",
  "steps": [
    "Restart op-node component",
    "Verify sync status recovery"
  ],
  "status": "pending-approval",
  "estimatedDuration": "5 minutes"
}

MCP Endpoints

MCP (Model Context Protocol) server for AI agent integration.

Base URL

http://localhost:3002/api/mcp

Available Tools

sentinai.getMetrics

Get current system metrics and anomaly status.

Arguments:

{
  "includeAnomalies": true,
  "includeHistory": false
}

Returns: Same as GET /api/metrics


sentinai.getRca

Get root cause analysis for latest anomaly.

Arguments:

{
  "anomalyEventId": "evt_abc123"  // optional
}

Returns: Same as POST /api/rca


sentinai.getPrediction

Get predictive scaling forecast.

Arguments:

{
  "horizonMinutes": 5
}

Returns:

{
  "predictedVCpu": 4,
  "confidence": 85,
  "trend": "rising",
  "keyFactors": ["TxPool growth", "Block interval variance"]
}

sentinai.executeAction

Execute approved scaling action.

Arguments:

{
  "action": "scale",
  "targetVCpu": 4,
  "reason": "AI agent recommendation"
}

Returns: Same as POST /api/scaler


sentinai.getAuditTrail

Fetch decision history.

Arguments:

{
  "limit": 20
}

Returns: Same as GET /api/agent-decisions


WebSocket API (Future)

Planned for Q1 2026:

Real-time metric streaming via WebSocket.

const ws = new WebSocket('wss://sentinai.example.com/api/stream');

ws.onmessage = (event) => {
  const data = JSON.parse(event.data);
  console.log(new Date().toISOString(), 'Metric update:', data);
};

// Expected payload:
{
  "type": "metric",
  "metric": "cpuUsage",
  "value": 45.3,
  "timestamp": "2026-02-23T07:00:00.000Z"
}

Rate Limits

Current: No enforced rate limits

Recommended Client Behavior:

  • Metrics polling: Max 1 req/30 seconds
  • Action execution: Max 1 req/5 minutes (respect cooldown)
  • RCA requests: Max 1 req/minute
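The recommended intervals above can be enforced client-side with a simple minimum-interval throttle. A sketch (class and endpoint keys are illustrative):

```typescript
// Minimal client-side throttle honoring per-endpoint minimum intervals.
// allow() returns true if a request may go out now, and records the attempt.
class MinIntervalLimiter {
  private last = new Map<string, number>();
  constructor(private intervalMs: Record<string, number>) {}

  allow(endpoint: string, now: number): boolean {
    const interval = this.intervalMs[endpoint] ?? 0;
    const prev = this.last.get(endpoint);
    if (prev !== undefined && now - prev < interval) return false;
    this.last.set(endpoint, now);
    return true;
  }
}

// Intervals from the recommendations above.
const limiter = new MinIntervalLimiter({
  "/api/metrics": 30_000,  // 1 req / 30 s
  "/api/scaler": 300_000,  // 1 req / 5 min
  "/api/rca": 60_000,      // 1 req / min
});
```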

Error Responses

Standard Error Format

{
  "error": "Error message",
  "code": "ERROR_CODE",
  "details": {}  // optional context
}
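A client can narrow an unknown response body to this error shape before branching on code. A minimal type guard, assuming only the format shown above:

```typescript
interface ApiError {
  error: string;
  code: string;
  details?: Record<string, unknown>;
}

// Narrow an unknown JSON body to the standard error format.
function isApiError(body: unknown): body is ApiError {
  return (
    typeof body === "object" && body !== null &&
    typeof (body as Record<string, unknown>).error === "string" &&
    typeof (body as Record<string, unknown>).code === "string"
  );
}
```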

Common Error Codes

Status  Code                  Description
400     INVALID_PARAMETERS    Missing or invalid request params
401     UNAUTHORIZED          Missing or invalid API key
403     FORBIDDEN             Read-only mode or cooldown active
404     NOT_FOUND             Resource not found
429     RATE_LIMIT_EXCEEDED   Too many requests
500     INTERNAL_ERROR        Server error
503     SERVICE_UNAVAILABLE   L2/K8s connection failed

Example: Full Incident Workflow

# 1. Check system health
curl http://localhost:3002/api/health

# 2. Get current metrics
curl http://localhost:3002/api/metrics

# 3. Detect anomaly → trigger RCA
curl -X POST http://localhost:3002/api/rca \
  -H "Content-Type: application/json" \
  -d '{"anomalyEventId": "evt_abc123"}'

# 4. Execute scaling action
curl -X POST http://localhost:3002/api/scaler \
  -H "x-api-key: your-api-key" \
  -H "Content-Type: application/json" \
  -d '{"action": "scale", "targetVCpu": 4, "reason": "RCA recommendation"}'

# 5. Verify execution
curl "http://localhost:3002/api/agent-decisions?limit=1"

# 6. Check audit trail
curl "http://localhost:3002/api/agent-decisions?limit=10"

For architecture details, see Architecture Guide.
For MCP integration, see MCP User Guide.


Autonomous Operations Endpoints

POST /api/autonomous/plan

Build a chain-aware autonomous plan from a standard intent.

Body:

{
  "intent": "recover_sequencer_path",
  "dryRun": true,
  "allowWrites": false
}

POST /api/autonomous/execute

Execute an autonomous operation by planId or direct intent.

Body:

{
  "intent": "stabilize_throughput",
  "dryRun": true,
  "allowWrites": false
}

Notes:

  • When dryRun=false and allowWrites=true, x-api-key is required.
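This guard can be enforced client-side before the request is sent. A minimal sketch, assuming only the rule above (the function and type names are hypothetical):

```typescript
interface ExecuteRequest {
  intent: string;
  dryRun: boolean;
  allowWrites: boolean;
}

// Build headers for /api/autonomous/execute, enforcing the rule above:
// dryRun=false with allowWrites=true requires x-api-key.
function executeHeaders(req: ExecuteRequest, apiKey?: string): Record<string, string> {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (!req.dryRun && req.allowWrites) {
    if (!apiKey) throw new Error("x-api-key required for live autonomous writes");
    headers["x-api-key"] = apiKey;
  }
  return headers;
}
```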

POST /api/autonomous/verify

Verify the post-conditions of an autonomous operation.

Body:

{
  "operationId": "op-uuid",
  "before": { "blockHeight": 100 },
  "after": { "blockHeight": 101 }
}

POST /api/autonomous/rollback

Run rollback actions for failed autonomous steps.

Body:

{
  "operationId": "op-uuid",
  "dryRun": true
}

Notes:

  • The rollback API always requires x-api-key when SENTINAI_API_KEY is configured.

MCP Autonomous Tools

  • get_autonomous_capabilities
  • plan_autonomous_operation
  • execute_autonomous_operation
  • verify_autonomous_operation
  • rollback_autonomous_operation

Write tool policy:

  • execute_autonomous_operation and rollback_autonomous_operation follow the same auth/approval guardrails as the existing write tools.