v1.4

Quickstart

Deploy FowyldAI and run your first query in under five minutes.

Prerequisites

Docker installed and running. For the GPU variant, an NVIDIA GPU with current drivers and the NVIDIA Container Toolkit (required for the --gpus flag).

Step 1: Pull and Run

Start FowyldAI with a single command:

With GPU

docker run -d \
  --name fowyldai \
  --gpus all \
  -p 8000:8000 \
  fowyldai/engine:latest

CPU only

docker run -d \
  --name fowyldai \
  -p 8000:8000 \
  fowyldai/engine:cpu-latest

First startup takes 1-2 minutes. The engine loads models into memory on first boot; subsequent starts are faster.

Step 2: Verify

Check that the engine is healthy:

curl http://localhost:8000/health

Expected response:

{
  "status": "healthy",
  "version": "1.4.0",
  "models_loaded": 1,
  "sovereign": true,
  "uptime_seconds": 42
}
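In scripts, you may want to wait for the engine to finish loading before sending queries. A minimal sketch in Python using only the standard library, based on the /health endpoint and response shown above:

```python
import json
import time
import urllib.request

HEALTH_URL = "http://localhost:8000/health"

def is_healthy(payload: dict) -> bool:
    """Return True when a /health response reports a healthy engine."""
    return payload.get("status") == "healthy"

def wait_for_engine(url: str = HEALTH_URL, timeout: float = 120.0) -> dict:
    """Poll /health until the engine is ready or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                payload = json.loads(resp.read())
            if is_healthy(payload):
                return payload
        except OSError:
            pass  # engine not accepting connections yet
        time.sleep(2)
    raise TimeoutError(f"Engine not healthy after {timeout:.0f}s")

# Example (requires the container from Step 1 to be running):
#   info = wait_for_engine()
#   print(info["version"], "ready,", info["models_loaded"], "model(s) loaded")
```

The 120-second default timeout comfortably covers the 1-2 minute first boot; tighten it for restarts.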

Step 3: Ask Your First Question

curl -X POST http://localhost:8000/ask \
  -H "Content-Type: application/json" \
  -d '{
    "query": "What are the top 3 cybersecurity risks for small businesses?"
  }'

FowyldAI returns a structured, reasoned response — no cloud calls, no data leaves your machine.
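The same request from Python, using only the standard library. The /ask endpoint and the query field come from the curl example above; the exact shape of the JSON response is not shown here, so the code returns it as a plain dict:

```python
import json
import urllib.request

ASK_URL = "http://localhost:8000/ask"

def build_ask_request(query: str) -> urllib.request.Request:
    """Package a question as a JSON POST to the /ask endpoint."""
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        ASK_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ask(query: str) -> dict:
    """Send a question to the local engine and decode the JSON response."""
    with urllib.request.urlopen(build_ask_request(query)) as resp:
        return json.loads(resp.read())

# Example (requires the engine to be running):
#   answer = ask("What are the top 3 cybersecurity risks for small businesses?")
#   print(json.dumps(answer, indent=2))
```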

Step 4: Try the OpenAI-Compatible API

Use FowyldAI as a drop-in replacement for OpenAI. Point any OpenAI-compatible client to your local instance:

curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "fowyld-default",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Explain zero-trust architecture in 3 sentences."}
    ]
  }'
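The same call from Python, again with only the standard library. The endpoint, model name, and messages come from the curl example above; the response handling assumes the standard OpenAI chat-completions shape (choices[0].message.content), which is what an OpenAI-compatible endpoint returns:

```python
import json
import urllib.request

CHAT_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(messages: list, model: str = "fowyld-default") -> urllib.request.Request:
    """Package an OpenAI-style chat completion request for the local engine."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        CHAT_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def chat(messages: list, model: str = "fowyld-default") -> str:
    """Call /v1/chat/completions and return the first choice's text."""
    with urllib.request.urlopen(build_chat_request(messages, model)) as resp:
        data = json.loads(resp.read())
    return data["choices"][0]["message"]["content"]

# Example (requires the engine to be running):
#   print(chat([
#       {"role": "system", "content": "You are a helpful assistant."},
#       {"role": "user", "content": "Explain zero-trust architecture in 3 sentences."},
#   ]))
```

Existing OpenAI SDKs work the same way: point the client's base URL at http://localhost:8000/v1 and pass any placeholder API key, since the local engine does not require one.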

That's it. FowyldAI is running on your infrastructure. No API keys. No cloud account. No data exfiltration.

What's Next