Installation & Configuration

Comprehensive guide to deploying the TestHide server stack and configuring distributed agents.

Docker Compose Configuration

The recommended way to deploy TestHide is using Docker Compose. This creates a complete stack with the frontend (Nginx), backend API (scalable replicas), AI API, AI Worker, MongoDB, and Redis.

docker-compose.prod.yaml
version: '3.8'
name: testhide

services:
  # ==========================================
  # FRONTEND (Nginx + Angular)
  # ==========================================
  frontend:
    container_name: testhide_frontend
    image: thuesdays/testhide-frontend:latest
    restart: always
    ports:
      - "80:80"
      - "443:443"
      - "7771:7771"
    env_file:
      - .env
    environment:
      - CERT_FILE=${CERT_FILE}
      - CERT_KEY=${CERT_KEY}
    volumes:
      - ./ssl:/etc/ssl:ro
      - ./nginx/api.conf:/etc/nginx/conf.d/api.conf:ro
      - testhide_static_data:/usr/share/nginx/html/static:ro
    depends_on:
      - backend
      - ai-api
    networks:
      - testhide-net
    deploy:
      resources:
        limits:
          cpus: "0.25"
          memory: 256m

  # ==========================================
  # BACKEND (Python API)
  # ==========================================
  backend:
    # Note: container_name removed to allow replicas
    # Note: No port exposed - accessed via nginx proxy
    image: thuesdays/testhide-backend:latest
    restart: unless-stopped
    env_file:
      - .env
    environment:
      - LOAD_AI_MODELS=false
      - RUN_AI_WORKER=false
      - GUNICORN_WORKERS=2
      - COMPOSE_PROJECT_NAME=testhide
    volumes:
      - testhide_static_data:/app/static
      - ./releases:/app/releases
      - ./monitoring_scripts:/app/monitoring_scripts
      - ./gunicorn.conf.py:/app/gunicorn.conf.py:ro
      - ./ssl:/etc/ssl:ro
      - ./sandbox_data:/app/sandbox_data
      # Docker access for container restart from the API
      - /var/run/docker.sock:/var/run/docker.sock
      - ./docker-compose.prod.yaml:/app/docker-compose.yaml:ro
    depends_on:
      mongo:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - testhide-net
    healthcheck:
      disable: true
    mem_limit: 2.5g
    memswap_limit: 2.5g
    deploy:
      replicas: 2
      resources:
        limits:
          cpus: "2.0"
          memory: 2.5g
        reservations:
          cpus: "0.5"
          memory: 1g

  # ==========================================
  # AI API (HTTP only — serves /api/v3/ai/ requests)
  # ==========================================
  ai-api:
    container_name: testhide_ai_api
    image: thuesdays/testhide-backend:latest
    restart: unless-stopped
    # Note: No external port - accessed via nginx proxy on 7771
    env_file:
      - .env
    environment:
      - LOAD_AI_MODELS=true
      - RUN_AI_WORKER=false
      - GUNICORN_WORKERS=2
    volumes:
      - testhide_static_data:/app/static
      - ./sandbox_data:/app/sandbox_data
      - ./releases:/app/releases
      - ./monitoring_scripts:/app/monitoring_scripts
      - ./gunicorn.conf.py:/app/gunicorn.conf.py:ro
      - ./ssl:/etc/ssl:ro
    depends_on:
      mongo:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - testhide-net
    healthcheck:
      disable: true
    mem_limit: 5g
    memswap_limit: 5g
    deploy:
      replicas: 1
      resources:
        limits:
          cpus: "2.0"
          memory: 5g
        reservations:
          cpus: "0.5"
          memory: 3g

  # ==========================================
  # AI WORKER (Background: training, vectorization, diagnostics)
  # ==========================================
  ai-worker:
    container_name: testhide_ai_worker
    image: thuesdays/testhide-backend:latest
    restart: unless-stopped
    env_file:
      - .env
    environment:
      - LOAD_AI_MODELS=true
      - RUN_AI_WORKER=true
      - GUNICORN_WORKERS=1
      # OOM prevention: must be BELOW mem_limit
      - AI_MEMORY_LIMIT_MB=7500
      # Reduce parallelism to lower peak memory
      - AI_ASSIST_INFER_WORKERS=2
      - AI_ASSIST_IO_WORKERS=3
      - AI_ASSIST_LLM_WORKERS=1
    volumes:
      - testhide_static_data:/app/static
      - ./sandbox_data:/app/sandbox_data
      - ./releases:/app/releases
      - ./monitoring_scripts:/app/monitoring_scripts
      - ./gunicorn.conf.py:/app/gunicorn.conf.py:ro
      - ./ssl:/etc/ssl:ro
    depends_on:
      mongo:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      testhide-net:
        aliases:
          - ai-worker
    healthcheck:
      disable: true
    mem_limit: 10g
    memswap_limit: 10g
    deploy:
      replicas: 1
      resources:
        limits:
          cpus: "2.0"
          memory: 10g
        reservations:
          cpus: "0.5"
          memory: 5g

  # ==========================================
  # DATABASES
  # ==========================================
  mongo:
    container_name: testhide_mongo
    image: 'mongo:8.2.3'
    restart: always
    command: [ "mongod", "--auth", "--wiredTigerCacheSizeGB", "2" ]
    environment:
      - MONGO_INITDB_ROOT_USERNAME=${MONGO_USER}
      - MONGO_INITDB_ROOT_PASSWORD=${MONGO_PASS}
    ports:
      - "27017:27017"
    volumes:
      - ${MONGO_DATA_PATH}:/data/db
    networks:
      - testhide-net
    healthcheck:
      test: [ "CMD", "mongosh", "--norc", "--quiet", "-u", "${MONGO_USER}", "-p", "${MONGO_PASS}", "--authenticationDatabase", "admin", "--eval", "db.adminCommand('ping')" ]
      interval: 30s
      timeout: 5s
      retries: 5
    deploy:
      resources:
        limits:
          cpus: "1.5"
          memory: 3g
        reservations:
          cpus: "0.5"
          memory: 1.5g

  redis:
    container_name: testhide_redis
    image: redis:7-alpine
    restart: always
    command: [ "redis-server", "--maxmemory", "512mb", "--maxmemory-policy", "allkeys-lru", "--appendonly", "no", "--requirepass", "${REDIS_PASSWORD}" ]
    networks:
      - testhide-net
    healthcheck:
      test: [ "CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping" ]
      interval: 10s
      timeout: 3s
      retries: 5
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 768m

# ==========================================
# VOLUMES & NETWORKS
# ==========================================
volumes:
  testhide_static_data:
    name: testhide_static_data

networks:
  testhide-net:
    name: testhide-net
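
With the compose file saved as docker-compose.prod.yaml and a .env file in place, a first deployment is a pull followed by a detached up. A minimal sketch, assuming Docker Engine with the Compose v2 plugin and both files in the current directory (the prerequisite guard is illustrative, not part of the product):

```shell
# First-time deployment (sketch). Assumes docker-compose.prod.yaml and .env
# sit in the current directory and Docker Engine with Compose v2 is installed.
deploy_stack() {
  docker compose -f docker-compose.prod.yaml pull    # fetch the published images
  docker compose -f docker-compose.prod.yaml up -d   # start the stack detached
  docker compose -f docker-compose.prod.yaml ps      # confirm services are running
}

if command -v docker >/dev/null 2>&1 && [ -f docker-compose.prod.yaml ]; then
  deploy_stack
else
  echo "prerequisites missing: install Docker and create the files first"
fi
```

Because mongo and redis declare healthchecks and the other services use depends_on with condition: service_healthy, Compose starts the databases first and holds the API containers until both report healthy.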

Environment Variables (.env)

Create a .env file in the same directory as your docker-compose.prod.yaml. Below is a complete production example, grouped by category.

Backend

CORS_ORIGINS=https://testhide.yourdomain.com

Allowed CORS origins for the API.

PORT=8080

Port the Python application listens on inside the container.

USE_SSL=true

Enables SSL/HTTPS support in the backend.

CERT_KEY=/etc/ssl/your_domain.key

Path to the SSL private key inside the container.

CERT_FILE=/etc/ssl/your_domain.crt

Path to the SSL certificate file inside the container.
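
Both paths must point at files mounted from the host's ./ssl directory (see the ssl volume entries in the compose file). For a quick smoke test you can generate a throwaway self-signed pair and confirm that key and certificate actually match before starting the stack; a sketch using the filenames from this guide (use real certificates in production):

```shell
# Generate a self-signed pair into ./ssl (testing only; replace in production).
mkdir -p ssl
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/CN=testhide.yourdomain.com" \
  -keyout ssl/your_domain.key -out ssl/your_domain.crt 2>/dev/null

# A matching pair hashes to the same public key.
key_hash=$(openssl pkey -in ssl/your_domain.key -pubout | openssl sha256)
crt_hash=$(openssl x509 -in ssl/your_domain.crt -pubkey -noout | openssl sha256)
[ "$key_hash" = "$crt_hash" ] && echo "certificate and key match"
```

A mismatch here (e.g. after renewing the certificate but not the key) is a common cause of containers failing to serve HTTPS.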

MongoDB

MONGO_HOST=mongo

Hostname of the MongoDB service (docker-compose service name).

MONGO_PORT=27017

Port for MongoDB connection.

MONGO_DB_NAME=testhide_database

Name of the MongoDB database.

MONGO_USER=testhide_user

Username for MongoDB authentication.

MONGO_PASS=<your_secure_password>

Password for MongoDB authentication. Use a strong random password.

MONGO_AUTH_SOURCE=testhide_database

MongoDB authentication database.

MONGO_DATA_PATH=/data/testhide/mongo

Host path for MongoDB data persistence.

Redis

REDIS_HOST=redis

Hostname of the Redis service (docker-compose service name).

REDIS_PORT=6379

Port for Redis connection.

REDIS_PASSWORD=<your_redis_password>

Password for Redis authentication. Generate with: openssl rand -base64 32

Security

JWT_SECRET=<random_secret_string>

Secret key for signing JWT tokens. Generate with: openssl rand -base64 32
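
Both REDIS_PASSWORD and JWT_SECRET should be long random strings. One way to fill them in is to generate both with openssl and append them to the .env file, as sketched below (adjust the file path to your layout):

```shell
# Generate two independent 32-byte secrets, base64-encoded (44 characters each).
redis_pass=$(openssl rand -base64 32)
jwt_secret=$(openssl rand -base64 32)

# Append to the environment file read by docker compose.
{
  echo "REDIS_PASSWORD=${redis_pass}"
  echo "JWT_SECRET=${jwt_secret}"
} >> .env
```

Rotating JWT_SECRET invalidates all previously issued tokens, so plan to do it during a maintenance window.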

Frontend URLs

API_URL=https://testhide.yourdomain.com:7771

Public API URL accessible to clients and agents.

WS_URL=wss://testhide.yourdomain.com:7771

WebSocket URL for real-time communication with agents.

FRONTEND_PUBLIC_URL=https://testhide.yourdomain.com

Public URL of the frontend application.

PRODUCTION=true

Enables production mode optimizations.

Nginx

INTERNAL_API_URL=http://backend:8080

Nginx uses this to proxy /api/* requests to the backend container.

Other

DEBUG=false

Disables debug logging and features.

LICENSE_API_URL=https://service.testhide.com/api/v1/

License server API URL. Override only for self-hosted license servers.

Gunicorn

GUNICORN_WORKERS=2

Number of Gunicorn worker processes per container.

GUNICORN_TIMEOUT=120

Worker timeout in seconds.

GUNICORN_GRACEFUL_TIMEOUT=30

Graceful shutdown timeout in seconds.

GUNICORN_KEEPALIVE=5

Keep-alive timeout for connections.

GUNICORN_LOG_LEVEL=info

Gunicorn log verbosity (debug, info, warning, error).

GUNICORN_MAX_REQUESTS=10000

Auto-recycle workers after N requests to prevent memory leaks.

AI / LLM

AI_LLM_REPO=bartowski/Phi-3.5-mini-instruct-GGUF

HuggingFace repo for LLM model auto-download.

AI_LLM_FILE=Phi-3.5-mini-instruct-Q5_K_M.gguf

GGUF model filename to download and use.

AI_LLM_CTX=32768

LLM context window size in tokens.

AI_LLM_THREADS=8

Number of threads for LLM inference.

AI_LLM_GPU_LAYERS=0

Number of GPU layers to offload (0 = CPU only).

AI_MEMORY_LIMIT_MB=6144

Memory threshold (MB) for the AI worker's memory-pressure check. Must be below the container's mem_limit.
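
The compose file above runs the AI worker with mem_limit: 10g and AI_MEMORY_LIMIT_MB=7500; the gap between the two is the headroom that lets the worker shed load before the kernel OOM-killer intervenes. A quick sanity check with this guide's numbers:

```shell
# Values from the compose file: 10g container limit, 7500 MB worker threshold.
limit_mb=$((10 * 1024))      # 10g expressed in MB
threshold_mb=7500
headroom_mb=$((limit_mb - threshold_mb))

echo "headroom: ${headroom_mb} MB"
if [ "$threshold_mb" -lt "$limit_mb" ]; then
  echo "OK: AI_MEMORY_LIMIT_MB is below mem_limit"
fi
```

If you lower the container's mem_limit, lower AI_MEMORY_LIMIT_MB with it so the threshold stays comfortably below the hard limit.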

Jira

JIRA_URL=https://jira.yourdomain.com/

Jira server URL for bug linking integration.

JIRA_USERNAME=<your_jira_username>

Jira service account username.

JIRA_PASSWORD=<your_jira_password>

Jira service account password or API token.

SCM

BITBUCKET_API_VERSION=1.0

Bitbucket API version (1.0 for Server, 2.0 for Cloud).

YOLO

AI_YOLO_MODEL=yolov8n.pt

YOLO model size: n (nano), s (small), m (medium), l (large), x (extra).

AI_YOLO_EPOCHS=5

Number of fine-tuning epochs.

AI_YOLO_BATCH=8

Training batch size.

AI_YOLO_FREEZE=10

Number of initial YOLO layers to freeze during fine-tuning.

AI_YOLO_FORCE_CPU=false

Force CPU inference (for servers without a GPU).

AI_YOLO_MIN_ANNOTATIONS=10

Minimum number of annotations before triggering training.

AI_YOLO_WEIGHT_CORRECTION=10

Training weight multiplier for user corrections.

AI_YOLO_WEIGHT_MANUAL=5

Training weight multiplier for manual annotations.

Data Persistence

TestHide uses Docker volumes and bind mounts to persist data across restarts:

  • mongo data (bind-mount): controlled via MONGO_DATA_PATH. Stores all configuration, user data, test results, and build metadata. Critical to back up.
  • testhide_static_data (volume): stores build artifacts (binaries, test reports, videos, screenshots) and server logs. Can grow large; configure retention in the UI or use disk monitoring.
  • Bind-mount directories:
      • ./releases — AI model releases & updates.
      • ./monitoring_scripts — custom monitoring plugins.
      • ./sandbox_data — AI sandbox working data.
      • ./ssl — SSL certificates.
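
Since the MongoDB bind-mount holds the critical dataset, schedule regular dumps. One approach is to run mongodump inside the container and stream a compressed archive to the host; a sketch assuming the testhide_mongo container name from the compose file, MONGO_USER/MONGO_PASS exported from .env, and mongodump present in the image:

```shell
# Dump the whole deployment to a compressed archive via the running container.
backup_mongo() {
  out="backups/mongo-$(date +%Y%m%d).archive.gz"
  mkdir -p backups
  docker exec testhide_mongo mongodump \
    --username "$MONGO_USER" --password "$MONGO_PASS" \
    --authenticationDatabase admin \
    --archive --gzip > "$out" && echo "wrote $out"
}
# Restore later with:
#   docker exec -i testhide_mongo mongorestore \
#     --username "$MONGO_USER" --password "$MONGO_PASS" \
#     --authenticationDatabase admin --archive --gzip < backups/<file>
```

Store the archives off-host and test a restore periodically; a backup that has never been restored is unverified.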