Quick Start
This walkthrough takes you from a fresh install to a real POST /api/{uuid}/hello-world call hitting OpenAI through PromptGate. You will:
- Log in.
- Add an OpenAI credential.
- Create a project and an AI endpoint.
- Issue a token.
- Call the endpoint with curl, Python, or Node.
If you haven’t installed PromptGate yet, do that first: Installation.
1. Log in and change the default password
Open http://localhost:8000. Log in with:
- Email: admin@promptgate.dev
- Password: admin
Click your name in the top-right corner → Profile → change the password. (You only need to do this once.)
2. Add a provider credential
Top-right user menu → Credentials. Click New credential.
- Name: OpenAI Production
- Provider: OpenAI
- Secret: paste your sk-… key
Save. The secret is encrypted with AES-256-GCM at rest; after saving, only the key prefix is ever shown again.
3. Create a project
Sidebar → Projects → + New project.
- Name: Quickstart
- Type: AI Gateway
- Environment: prod
You’ll land on the project switcher; click your new project to enter it. The sidebar now shows project-scoped items: AI Endpoints, Playground, Live Logs, Metrics, Guardrails, API Tokens, Webhooks.
4. Create an AI endpoint
Project sidebar → AI Endpoints → + New endpoint.
The endpoint wizard has 7 tabs, but you only need to fill out two for this walkthrough:
Tab 1 — Core:
- Name: Hello World
- Slug: auto-fills as hello-world
Tab 2 — Provider:
- Mode: Manual
- Provider: OpenAI
- Model: gpt-4o-mini
- Credential: pick the one you just added
Skip Limits, Streaming, Sessions, Prompt, and Schema for now — defaults are fine. Hit Create.
You’ll land on the endpoint detail page. Click the eye icon next to the slug to see the endpoint URL — something like:
POST http://localhost:8000/api/8e3f...c2/hello-world
5. Issue an API token
Project sidebar → API Tokens → + New token.
- Name: quickstart
- Environment: live
- Scopes: tick chat
Save. The plaintext token (pg_live_…) is shown once. Copy it now.
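The Python and Node.js examples in the next step read the project UUID and token from the PG_UUID and PG_TOKEN environment variables. You can set them in your shell like this (the values below are placeholders; substitute your own):

```shell
# Placeholder values: use your project's UUID and the token you just copied.
export PG_UUID="8e3f...c2"
export PG_TOKEN="pg_live_your_token_here"
```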
6. Call the endpoint
Section titled “6. Call the endpoint”Replace <UUID> with your project’s UUID (visible in the endpoint detail URL) and <TOKEN> with the plaintext you just copied.
```shell
curl -X POST http://localhost:8000/api/<UUID>/hello-world \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"message": "Say hi in one sentence."}'
```
Python
```python
import os, requests

resp = requests.post(
    f"http://localhost:8000/api/{os.environ['PG_UUID']}/hello-world",
    headers={"Authorization": f"Bearer {os.environ['PG_TOKEN']}"},
    json={"message": "Say hi in one sentence."},
)
resp.raise_for_status()
print(resp.json())
```
Node.js
```javascript
const uuid = process.env.PG_UUID;
const token = process.env.PG_TOKEN;

const r = await fetch(`http://localhost:8000/api/${uuid}/hello-world`, {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${token}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ message: 'Say hi in one sentence.' }),
});
console.log(await r.json());
```
You should see something like:
```json
{
  "ok": true,
  "id": "chatcmpl-...",
  "model": "gpt-4o-mini",
  "content": "Hi there! How can I help today?",
  "finish_reason": "stop",
  "usage": {
    "prompt_tokens": 14,
    "completion_tokens": 9,
    "total_tokens": 23
  }
}
```
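In application code you will usually want to check the ok flag and track token usage before using the completion. A minimal sketch, assuming the response shape shown above (the summarize_response helper is our illustration, not part of PromptGate or its SDK):

```python
def summarize_response(body: dict) -> str:
    """Return the completion text, raising if the gateway reported a failure.

    Assumes the response shape shown above; not an official PromptGate client.
    """
    if not body.get("ok"):
        raise RuntimeError(f"gateway error: {body}")
    usage = body.get("usage", {})
    # Total token count feeds cost tracking; "n/a" if the provider omitted usage.
    print(f"tokens used: {usage.get('total_tokens', 'n/a')}")
    return body["content"]

# Example with the sample response from above:
sample = {
    "ok": True,
    "content": "Hi there! How can I help today?",
    "usage": {"prompt_tokens": 14, "completion_tokens": 9, "total_tokens": 23},
}
print(summarize_response(sample))
```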
7. Watch it in the UI
While that request is firing, navigate to Live Logs in the project sidebar — you’ll see the request appear in real time, with status, latency, and token counts. Metrics rolls these up into 24h / 7d charts, and the Audit Log records the creation and use of the endpoint and token.
What you just did
- ✅ Encrypted a provider credential at rest
- ✅ Created a project-scoped AI endpoint with a fixed provider + model
- ✅ Issued a scoped, hashed API token
- ✅ Routed a chat completion through the gateway
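A "hashed" token means the gateway stores only a digest of the pg_live_… value and compares digests on each request, so a leaked database never exposes usable tokens. A minimal sketch of the idea (our illustration with a made-up token value, not PromptGate's actual code):

```python
import hashlib
import hmac

def hash_token(token: str) -> str:
    # Only this digest is stored server-side; the plaintext is shown once, then discarded.
    return hashlib.sha256(token.encode()).hexdigest()

stored = hash_token("pg_live_example_token")  # hypothetical token value

def verify(presented: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(hash_token(presented), stored)

print(verify("pg_live_example_token"))  # True
print(verify("pg_live_wrong"))          # False
```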
Next steps
- AI Endpoints — system prompts, schemas, sessions, streaming, failover.
- Guardrails — turn on PII redaction, prompt-injection blocking, keyword blocklists.
- Rate Limits & Budgets — caps per-minute, per-hour, per-month.
- AI Wrapper — point any OpenAI SDK at the gateway directly.
- Cookbook — task-oriented walkthroughs.
© Akyros Labs LLC. All rights reserved.