## Cognest Cloud

The fastest way to deploy. Cognest Cloud handles scaling, SSL, monitoring, and webhook endpoints automatically.

```bash
# Deploy to Cognest Cloud
cognest deploy

# Deploy to a specific environment
cognest deploy --env production

# Deploy with environment variables
cognest deploy --env production --env-file .env.production
```

### Zero-config webhooks

Cognest Cloud automatically provisions webhook URLs for each integration. No need to configure ngrok or set up your own public endpoints.
## Docker

Self-host with the official Docker image. Cognest provides a Dockerfile and docker-compose.yml for production-ready deployments.

```bash
# Generate Docker configuration
cognest docker init

# Build and run
docker compose up -d

# View logs
docker compose logs -f cognest
```

```yaml
# docker-compose.yml (generated)
version: "3.9"
services:
  cognest:
    build: .
    ports:
      - "3000:3000"
    env_file:
      - .env.production
    volumes:
      - cognest-data:/app/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data
    restart: unless-stopped

volumes:
  cognest-data:
  redis-data:
```

## Standalone Node.js

Run Cognest as a standalone Node.js process on any VPS, EC2 instance, or PaaS provider.
```bash
# Build for production
cognest build

# Start the production server
NODE_ENV=production cognest start

# Or use PM2 for process management
pm2 start cognest --name my-assistant -- start
```

## Serverless

Deploy to serverless platforms like Vercel, AWS Lambda, or Cloudflare Workers. Cognest provides adapter packages for each platform.
```ts
// api/cognest/route.ts
import { Cognest } from '@cognest/sdk'
import { createVercelHandler } from '@cognest/adapter-vercel'

const cognest = new Cognest()

export const { GET, POST } = createVercelHandler(cognest)
```

## Health checks

Cognest exposes a /health endpoint that returns the status of all configured integrations and the Think Engine. Use this for load balancer health checks and monitoring.
```json
{
  "status": "healthy",
  "version": "1.2.0",
  "uptime": 86400,
  "integrations": {
    "whatsapp": { "status": "connected", "latency_ms": 45 },
    "slack": { "status": "connected", "latency_ms": 32 },
    "gmail": { "status": "connected", "latency_ms": 120 }
  },
  "engine": {
    "status": "ready",
    "provider": "openai",
    "model": "gpt-4o"
  }
}
```
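As a sketch of how a monitoring script might consume this payload, the following evaluates the response shape shown above. The `summarizeHealth` helper and the 100 ms latency threshold are illustrative assumptions, not part of the Cognest SDK:

```typescript
// Shape of the /health response documented above.
interface IntegrationStatus { status: string; latency_ms: number }
interface HealthResponse {
  status: string
  version: string
  uptime: number
  integrations: Record<string, IntegrationStatus>
  engine: { status: string; provider: string; model: string }
}

// Hypothetical helper (not part of the SDK): summarize a /health payload
// into an overall verdict plus any disconnected or slow integrations.
function summarizeHealth(h: HealthResponse, maxLatencyMs = 100) {
  const entries = Object.entries(h.integrations)
  // Integrations that are not connected at all.
  const down = entries
    .filter(([, i]) => i.status !== 'connected')
    .map(([name]) => name)
  // Connected integrations whose latency exceeds the threshold.
  const slow = entries
    .filter(([, i]) => i.status === 'connected' && i.latency_ms > maxLatencyMs)
    .map(([name]) => name)
  const ok =
    h.status === 'healthy' && h.engine.status === 'ready' && down.length === 0
  return { ok, slow, down }
}

// The sample payload from the documentation above.
const sample: HealthResponse = {
  status: 'healthy',
  version: '1.2.0',
  uptime: 86400,
  integrations: {
    whatsapp: { status: 'connected', latency_ms: 45 },
    slack: { status: 'connected', latency_ms: 32 },
    gmail: { status: 'connected', latency_ms: 120 },
  },
  engine: { status: 'ready', provider: 'openai', model: 'gpt-4o' },
}

console.log(summarizeHealth(sample))
// → { ok: true, slow: ['gmail'], down: [] }
```

A load balancer only needs the HTTP status code, but a monitor that parses the body this way can alert on degraded integrations (here, gmail's 120 ms latency) before they fail outright.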