
Config File

Overview#

cognest.config.yaml is the central configuration file for your Cognest project. It defines your integrations, Think Engine settings, skills, and deployment options.
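
A minimal configuration can be as small as a project block and an engine block; the sketch below assumes that any section you omit falls back to the defaults documented in the full reference that follows.

cognest.config.yaml
# Minimal example (sketch) — omitted sections are assumed to use their defaults
project:
  name: my-assistant
  version: 1.0.0

engine:
  provider: openai
  model: gpt-4o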

Full Reference#

cognest.config.yaml
# cognest.config.yaml — Full Reference
# ─────────────────────────────────────

# Project metadata
project:
  name: my-assistant
  version: 1.0.0
  description: "AI assistant for customer support"

# Think Engine configuration
engine:
  provider: openai          # openai | anthropic | local
  model: gpt-4o             # Model identifier
  temperature: 0.7          # 0.0–2.0, lower = more deterministic
  max_tokens: 2048          # Max response tokens
  system_prompt: |
    You are a helpful customer support assistant.
    Always be professional, concise, and empathetic.
  context:
    max_history: 20          # Messages to include in context
    include_metadata: true   # Include event metadata in context
    memory_backend: local    # local | redis | postgres

# Integration configurations
integrations:
  whatsapp:
    enabled: true
    credentials:
      phone_number_id: ${WHATSAPP_PHONE_ID}
      access_token: ${WHATSAPP_ACCESS_TOKEN}
    settings:
      webhook_verify_token: ${WEBHOOK_VERIFY_TOKEN}
      auto_reply: true
      typing_indicator: true

  slack:
    enabled: true
    credentials:
      bot_token: ${SLACK_BOT_TOKEN}
      signing_secret: ${SLACK_SIGNING_SECRET}
    settings:
      channels: ["#support", "#general"]
      thread_replies: true

  gmail:
    enabled: true
    credentials:
      client_id: ${GMAIL_CLIENT_ID}
      client_secret: ${GMAIL_CLIENT_SECRET}
    settings:
      poll_interval: 60      # seconds
      labels: ["INBOX"]

# Skills configuration
skills:
  installed:
    - email-triage
    - calendar-sync
    - ticket-router
  custom_dir: ./skills       # Local skills directory
  nesthub:
    auto_update: true
    registry: https://hub.cognest.ai

# Server settings
server:
  port: 3000
  host: 0.0.0.0
  cors:
    origins: ["*"]
  rate_limit:
    window: 60               # seconds
    max_requests: 100

# Logging
logging:
  level: info                # debug | info | warn | error
  format: json               # json | pretty
  destination: stdout        # stdout | file
  file_path: ./logs/cognest.log  # Used only when destination is file

Project Section#

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| name | string | | Project name (used in logs and dashboard) |
| version | string | 1.0.0 | Semantic version of your project |
| description | string | | Human-readable project description |

Engine Section#

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| provider | string | openai | LLM provider: openai, anthropic, or local |
| model | string | gpt-4o | Model identifier for the chosen provider |
| temperature | number | 0.7 | Sampling temperature (0.0–2.0) |
| max_tokens | number | 2048 | Maximum tokens in LLM response |
| system_prompt | string | | System prompt for the Think Engine |
| context.max_history | number | 20 | Max messages in conversation context |
| context.include_metadata | boolean | true | Include event metadata in LLM context |
| context.memory_backend | string | local | State persistence: local, redis, or postgres |
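
Switching the Think Engine to a different provider only requires changing the provider and model fields. The sketch below is illustrative; the model identifier is an assumption, so check your provider's documentation for the exact value.

cognest.config.yaml
# Sketch: pointing the Think Engine at Anthropic instead of OpenAI
engine:
  provider: anthropic
  model: claude-sonnet-4-5   # illustrative model identifier — verify with your provider
  temperature: 0.3           # lower values give more deterministic replies
  max_tokens: 2048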

Variable Substitution#

Use the ${VAR_NAME} syntax to reference environment variables in your config file. Cognest resolves these at startup time.

cognest.config.yaml
# Environment variable references
integrations:
  slack:
    credentials:
      bot_token: ${SLACK_BOT_TOKEN}      # Reads from process.env
      signing_secret: ${SLACK_SECRET}     # Required at startup

# With defaults (using || syntax)
engine:
  model: ${LLM_MODEL||gpt-4o}            # Falls back to gpt-4o
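
Because these values are read from process.env, one way to supply them during local development is to export them in the shell before starting Cognest. The values below are placeholders.

terminal
# Provide the referenced variables before starting (placeholder values)
export SLACK_BOT_TOKEN="xoxb-your-bot-token"
export SLACK_SECRET="your-signing-secret"
export LLM_MODEL="gpt-4o"        # optional — falls back to gpt-4o if unset

cognest dev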

Multiple Environments#

Create environment-specific config files that extend the base configuration:

terminal
# File structure
cognest.config.yaml              # Base config
cognest.config.development.yaml  # Dev overrides
cognest.config.production.yaml   # Production overrides

# Run with a specific environment
COGNEST_ENV=production cognest dev
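
An override file typically contains only the keys that differ from the base configuration. The sketch below assumes the environment file is merged over cognest.config.yaml; the values shown are examples, not recommendations.

cognest.config.production.yaml
# Sketch of a production override — assumed to be merged over the base config
engine:
  temperature: 0.3                           # more deterministic answers in production

server:
  cors:
    origins: ["https://app.example.com"]     # tighten CORS outside development

logging:
  level: warn
  format: json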

Config validation

Run 'cognest config validate' to check your configuration file for errors before deploying.