The `cognest.config.yaml` file is the central configuration file for your Cognest project. It defines your integrations, engine settings, skills, and deployment options.
```yaml
# cognest.config.yaml — Full Reference
# ─────────────────────────────────────

# Project metadata
project:
  name: my-assistant
  version: 1.0.0
  description: "AI assistant for customer support"

# Think Engine configuration
engine:
  provider: openai       # openai | anthropic | local
  model: gpt-4o          # Model identifier
  temperature: 0.7       # 0.0–2.0, lower = more deterministic
  max_tokens: 2048       # Max response tokens
  system_prompt: |
    You are a helpful customer support assistant.
    Always be professional, concise, and empathetic.
  context:
    max_history: 20           # Messages to include in context
    include_metadata: true    # Include event metadata in context
    memory_backend: local     # local | redis | postgres

# Integration configurations
integrations:
  whatsapp:
    enabled: true
    credentials:
      phone_number_id: ${WHATSAPP_PHONE_ID}
      access_token: ${WHATSAPP_ACCESS_TOKEN}
    settings:
      webhook_verify_token: ${WEBHOOK_VERIFY_TOKEN}
      auto_reply: true
      typing_indicator: true
  slack:
    enabled: true
    credentials:
      bot_token: ${SLACK_BOT_TOKEN}
      signing_secret: ${SLACK_SIGNING_SECRET}
    settings:
      channels: ["#support", "#general"]
      thread_replies: true
  gmail:
    enabled: true
    credentials:
      client_id: ${GMAIL_CLIENT_ID}
      client_secret: ${GMAIL_CLIENT_SECRET}
    settings:
      poll_interval: 60    # seconds
      labels: ["INBOX"]

# Skills configuration
skills:
  installed:
    - email-triage
    - calendar-sync
    - ticket-router
  custom_dir: ./skills    # Local skills directory
  nesthub:
    auto_update: true
    registry: https://hub.cognest.ai

# Server settings
server:
  port: 3000
  host: 0.0.0.0
  cors:
    origins: ["*"]
  rate_limit:
    window: 60          # seconds
    max_requests: 100

# Logging
logging:
  level: info            # debug | info | warn | error
  format: json           # json | pretty
  destination: stdout    # stdout | file
  file_path: ./logs/cognest.log
```

The fields in the `project` section:

| Field | Type | Default | Description |
|---|---|---|---|
| name | string | — | Project name (used in logs and dashboard) |
| version | string | 1.0.0 | Semantic version of your project |
| description | string | — | Human-readable project description |
The fields in the `engine` section:

| Field | Type | Default | Description |
|---|---|---|---|
| provider | string | openai | LLM provider: openai, anthropic, or local |
| model | string | gpt-4o | Model identifier for the chosen provider |
| temperature | number | 0.7 | Sampling temperature (0.0–2.0) |
| max_tokens | number | 2048 | Maximum tokens in LLM response |
| system_prompt | string | — | System prompt for the Think Engine |
| context.max_history | number | 20 | Max messages in conversation context |
| context.include_metadata | boolean | true | Include event metadata in LLM context |
| context.memory_backend | string | local | State persistence: local, redis, postgres |
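As a sketch of how the defaults interact, switching the Think Engine to a different provider only requires overriding the keys that change (this assumes omitted keys fall back to the defaults in the table above; the model identifier shown is illustrative):

```yaml
# Sketch: minimal engine block for the anthropic provider.
# Assumes unset keys (max_tokens, context.*, etc.) use table defaults.
engine:
  provider: anthropic
  model: claude-sonnet-4    # illustrative model identifier
  temperature: 0.2          # lower = more deterministic replies
```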
Use the `${VAR_NAME}` syntax to reference environment variables in your config file. Cognest resolves these at startup time.

```yaml
# Environment variable references
integrations:
  slack:
    credentials:
      bot_token: ${SLACK_BOT_TOKEN}      # Reads from process.env
      signing_secret: ${SLACK_SECRET}    # Required at startup

# With defaults (using || syntax)
engine:
  model: ${LLM_MODEL||gpt-4o}    # Falls back to gpt-4o
```

Create environment-specific config files that extend the base configuration:
```
# File structure
cognest.config.yaml                 # Base config
cognest.config.development.yaml     # Dev overrides
cognest.config.production.yaml      # Production overrides
```

```bash
# Run with a specific environment
COGNEST_ENV=production cognest dev
```
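As an illustrative sketch, a development override file might relax logging and tweak a few engine and server settings. This assumes (the merge semantics are not specified here) that environment files deep-merge over the base `cognest.config.yaml`:

```yaml
# cognest.config.development.yaml — hypothetical example
# Assumes keys here deep-merge over the base config
engine:
  temperature: 0.9    # more exploratory responses in dev
logging:
  level: debug
  format: pretty      # human-readable logs for local work
server:
  port: 3001          # avoid clashing with the base config's port
```

Only the keys listed in an override file change; everything else comes from the base configuration.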
Run `cognest config validate` to check your configuration file for errors before deploying.