Secure60 Log Analyser

Overview

Secure60 Log Analyser is a service that automatically discovers log patterns from your log events. It analyzes incoming log messages, identifies common patterns, and exports pattern statistics to your Secure60 project for intelligent log enrichment and analysis.

Key Capabilities

  1. Automatic pattern discovery: variable parts of log messages are identified and replaced with <*> wildcards
  2. Timestamp normalization: timestamps and dates are normalized before pattern discovery
  3. Similarity merging: near-duplicate patterns are merged automatically to improve pattern quality
  4. Periodic export: the top patterns (by count) are flushed to your Secure60 project on a configurable interval

Quick Start

The simplest way to deploy Secure60 Log Analyser is alongside your Secure60 Collector. This allows the Collector to automatically send log events to the Log Analyser for pattern discovery.

Step 1: Configure the Collector

Add the following lines to your Secure60 Collector’s .env file:

ALTERNATE_LOCATION=http://<hostname-or-ip>:89
ENABLE_ALTERNATE_ENDPOINT=true

Replace <hostname-or-ip> with the IP address or hostname where your Log Analyser will be accessible. The Collector will automatically forward log events to this endpoint.

Step 2: Create the Log Analyser Environment File

Create a file named .log-template.env in the same directory as your compose.yaml:

S60_LOG_PATTERN_FLUSH_INTERVAL_SECONDS=120
S60_LOG_PATTERN_MAX=5000
S60_INGEST_ENDPOINT=https://ingest.secure60.io/data/1.0/metrics/project/<project-id>
S60_INGEST_TOKEN=<your-jwt-token>
S60_WEB_CONCURRENCY=6

Replace:

  1. <project-id> with your Secure60 project ID
  2. <your-jwt-token> with the JWT token for your Secure60 ingest endpoint

Step 3: Add to Docker Compose

Add the Log Analyser service to your compose.yaml file:

services:
  s60-collector:
    image: "secure60/s60-collector:1.10"
    container_name: "s60-collector"
    ports:
      - "443:443"
      - "6514:6514"
    env_file:
      - .env
    restart: 'always'
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "10"
  
  s60-log-analyser:
    image: "secure60/s60-log-analyser:1.03"
    container_name: "s60-log-analyser"
    ports:
      - "89:80"
    env_file:
      - .log-template.env
    restart: 'always'
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "10"

networks:
  hostnet:
    name: host
    external: true

Step 4: Start the Services

docker compose up -d

The Log Analyser will start processing log events from the Collector and automatically discover patterns. Discovered patterns are exported to your Secure60 project every 120 seconds (configurable via S60_LOG_PATTERN_FLUSH_INTERVAL_SECONDS).

Configuration Reference

Secure60 Log Analyser is configured using environment variables. The simplest deployment only requires the ingest endpoint and JWT token. All other settings have sensible defaults.

Required Configuration

S60_INGEST_ENDPOINT
  Full URL to your Secure60 ingest endpoint.
  Example: https://ingest.secure60.io/data/1.0/metrics/project/<project-id>

S60_INGEST_TOKEN
  JWT token for authenticating with the Secure60 ingest endpoint. Sent as an Authorization: Bearer <token> header.

Pattern Discovery Configuration

S60_MESSAGE_FIELD_PATH (default: auto-detect)
  Custom dotted path to the log message field (e.g., log.original). If not set, the service probes, in order: message_text, message, log.original, event.original.

S60_NORMALIZE_TIMESTAMPS (default: true)
  Enable or disable automatic timestamp normalization. When enabled, timestamps are replaced with <*> wildcards before pattern discovery.

S60_TIMESTAMP_PATTERNS (default: built-in patterns)
  Comma-separated list of custom regex patterns for timestamp detection. Example: \d{4}-\d{2}-\d{2},\d{2}/\d{2}/\d{4}

S60_PATTERN_SIMILARITY_MERGE (default: true)
  Enable or disable automatic merging of very similar patterns.

S60_PATTERN_SIMILARITY_THRESHOLD (default: 0.90)
  Similarity threshold (0.0-1.0) for pattern merging. Patterns with similarity >= threshold are merged.

Export/Flush Configuration

S60_LOG_PATTERN_MAX (default: 5000)
  Maximum number of patterns to send per flush. The top N patterns by count are always selected.

S60_LOG_PATTERN_FLUSH_INTERVAL_SECONDS (default: 120)
  How often to flush discovered patterns to the ingest endpoint, in seconds.

S60_LOG_PATTERN_STDOUT (default: false)
  When true, prints discovered log patterns to stdout for debugging.

Performance Configuration

S60_WEB_CONCURRENCY (default: 1)
  Number of worker processes to run. Increase for multi-core systems (recommended: 4-8 for high-throughput deployments).

S60_LOG_PATTERN_CACHE_MAX (default: 20000)
  Maximum number of cached pattern mappings. Each worker maintains its own cache.

Network Timeout Configuration

S60_INGEST_CONNECT_TIMEOUT_SECONDS (default: 10)
  Connection timeout for ingest endpoint requests, in seconds.

S60_INGEST_READ_TIMEOUT_SECONDS (default: 15)
  Read timeout for ingest endpoint requests, in seconds.

S60_INGEST_WRITE_TIMEOUT_SECONDS (default: 15)
  Write timeout for ingest endpoint requests, in seconds.

S60_INGEST_POOL_TIMEOUT_SECONDS (default: 5)
  Connection pool timeout, in seconds.

Debug Configuration

S60_DEBUG_LOG_PATH (default: not set)
  Path to write debug logs (NDJSON format). Example: /debug/s60-log-analyser.debug.ndjson

S60_DEBUG_RUN_ID (default: not set)
  Run identifier for debug logs. Useful for correlating logs across restarts.

How It Works

Pattern Discovery

The Log Analyser receives JSON log events via HTTP POST. It extracts the log message from the event (using the configured field path) and analyzes it to discover patterns:

  1. Message Extraction: The service looks for the log message in common fields (message_text, message, log.original, event.original) or uses a custom field path
  2. Timestamp Normalization: If enabled, timestamps and dates are normalized to <*> wildcards before processing
  3. Pattern Generation: The log message is analyzed to identify variable parts, which are replaced with <*> wildcards
  4. Pattern Aggregation: Patterns are tracked in-memory with counts of how many times each pattern has been seen
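
The service's actual discovery algorithm is not documented here, but the masking idea in steps 2-3 can be sketched roughly. The regexes and tokenization below are illustrative assumptions, not the real implementation:

```python
import re

# Illustrative only: a crude approximation of timestamp normalization and
# wildcard masking. The service's actual rules are not documented here.
TIMESTAMP_RE = re.compile(r"\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}")

def rough_pattern(message: str) -> str:
    msg = TIMESTAMP_RE.sub("<*>", message)  # step 2: normalize timestamps
    masked = []
    for tok in msg.split():
        # step 3: treat numbers and quoted strings as variable parts
        if re.fullmatch(r"\d+", tok) or (tok.startswith('"') and tok.endswith('"')):
            masked.append("<*>")
        else:
            masked.append(tok)
    return " ".join(masked)

# Both sample messages from the API examples below map to the same pattern:
p1 = rough_pattern('User login failed for username: "john" , please check username and password')
p2 = rough_pattern('User login failed for username: "bob" , please check username and password')
assert p1 == p2
print(p1)  # User login failed for username: <*> , please check username and password
```

Step 4 then amounts to counting how many times each resulting pattern string has been seen.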

Pattern Export

Periodically (every S60_LOG_PATTERN_FLUSH_INTERVAL_SECONDS seconds), the service exports the top N patterns by count to your Secure60 ingest endpoint.
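
The export payload format is not documented here, but the top-N selection itself is straightforward; a sketch under assumed internal data shapes:

```python
import heapq
from collections import Counter

# Illustrative sketch of the top-N selection described above; the field
# names and data shapes are assumptions, not the service's internals.
pending = Counter({
    "pattern-a": 150,
    "pattern-b": 42,
    "pattern-c": 9,
})

S60_LOG_PATTERN_MAX = 2  # keep only the 2 most frequent patterns per flush

top = heapq.nlargest(S60_LOG_PATTERN_MAX, pending.items(), key=lambda kv: kv[1])
print(top)  # [('pattern-a', 150), ('pattern-b', 42)]
```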

Pattern Similarity Merging

When enabled, the service automatically merges patterns that differ only slightly, reducing duplicates and improving pattern quality.
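
The service's similarity metric is not specified in this document. As one hedged illustration, a ratio-based measure such as Python's difflib.SequenceMatcher shows how a 0.90 threshold behaves:

```python
from difflib import SequenceMatcher

# Illustrative only: the service's actual similarity metric is undocumented.
def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

p1 = "User login failed for username: <*> , please check username and password"
p2 = "User login failed for user: <*> , please check username and password"

# With S60_PATTERN_SIMILARITY_THRESHOLD=0.90, these near-duplicates
# would be candidates for merging:
if similarity(p1, p2) >= 0.90:
    print("patterns would be merged")
```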

API Endpoints

POST /

Ingest log events for pattern discovery. Accepts either a single JSON object or an array of JSON objects.

Request Example:

curl -X POST http://localhost:89/ \
  -H 'Content-Type: application/json' \
  -d '{"message_text":"User login failed for username: \"john\" , please check username and password"}'

Response:

{
  "ok": true,
  "log_pattern_id": "a1b2c3d4e5f67890",
  "discovered": true,
  "from_cache": false,
  "field_used": "message_text"
}

Batch Request Example:

curl -X POST http://localhost:89/ \
  -H 'Content-Type: application/json' \
  -d '[
    {"message_text":"User login failed for username: \"john\" , please check username and password"},
    {"message_text":"User login failed for username: \"bob\" , please check username and password"}
  ]'

GET /health

Health check endpoint.

Response:

{
  "ok": true
}

GET /stats/log-patterns

Returns the top 25 pending log patterns by count.

Response:

{
  "ok": true,
  "pending_top": [
    {
      "log_pattern_id": "a1b2c3d4e5f67890",
      "pending_count": 150,
      "total_count": 150
    }
  ]
}

GET /stats/flush

Returns flush statistics and configuration.

Response:

{
  "ok": true,
  "pid": 12345,
  "flush_interval_s": 120,
  "ingest_client_disconnects": 0,
  "ingest_endpoint": {
    "scheme": "https",
    "host": "ingest.secure60.io",
    "port": 443,
    "path": "/data/1.0/metrics/project/<project-id>"
  },
  "ingest_jwt_set": true,
  "pending_patterns": 42,
  "pending_total_count": 1250,
  "flush_last_ok_unix": 1704067200.0,
  "flush_last_error": null,
  "flush_consecutive_failures": 0
}
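
A monitoring script can watch flush_consecutive_failures in this response to detect export problems. A sketch of such a check (the failure threshold of 3 is an arbitrary assumption for illustration):

```python
# Decide whether a parsed /stats/flush response indicates unhealthy exports.
# The max_failures threshold is an arbitrary assumption, not a documented value.
def flush_unhealthy(stats: dict, max_failures: int = 3) -> bool:
    return (not stats.get("ok", False)
            or stats.get("flush_consecutive_failures", 0) >= max_failures)

stats = {"ok": True, "flush_consecutive_failures": 0, "flush_last_error": None}
print(flush_unhealthy(stats))  # False
```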

Advanced Options

Performance Tuning

For high-throughput deployments, consider:

  1. Increasing S60_WEB_CONCURRENCY to match your CPU core count (recommended: 4-8)
  2. Raising S60_LOG_PATTERN_CACHE_MAX if your logs contain many distinct messages (each worker maintains its own cache)
  3. Increasing S60_LOG_PATTERN_FLUSH_INTERVAL_SECONDS to reduce API call frequency

Custom Message Field Paths

If your log events use non-standard field names, set S60_MESSAGE_FIELD_PATH to a dotted path:

S60_MESSAGE_FIELD_PATH=log.original

Or for nested fields:

S60_MESSAGE_FIELD_PATH=event.log.message
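
A dotted path resolves one object key per segment. A minimal sketch of that lookup (the helper name is hypothetical, not the service's code):

```python
# Hypothetical helper: walk one dict level per dot-separated segment,
# returning None if any segment is missing.
def resolve_dotted_path(event: dict, path: str):
    current = event
    for segment in path.split("."):
        if not isinstance(current, dict) or segment not in current:
            return None
        current = current[segment]
    return current

event = {"event": {"log": {"message": "disk full on /var"}}}
print(resolve_dotted_path(event, "event.log.message"))  # disk full on /var
```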

Custom Timestamp Patterns

If your logs use non-standard timestamp formats, provide custom regex patterns:

S60_TIMESTAMP_PATTERNS=\d{4}-\d{2}-\d{2},\d{2}/\d{2}/\d{4}
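
The value is a comma-separated list of regexes, so the example above supplies two patterns. A sketch of how such a value might be split and applied (illustrative, not the service's parser; the naive split assumes no commas inside a pattern):

```python
import re

S60_TIMESTAMP_PATTERNS = r"\d{4}-\d{2}-\d{2},\d{2}/\d{2}/\d{4}"

# Split the comma-separated env value into individual regexes.
patterns = [re.compile(p) for p in S60_TIMESTAMP_PATTERNS.split(",")]

def normalize(message: str) -> str:
    # Replace every match of every configured pattern with the <*> wildcard.
    for pat in patterns:
        message = pat.sub("<*>", message)
    return message

print(normalize("backup finished 2024-01-15"))  # backup finished <*>
print(normalize("backup finished 01/15/2024"))  # backup finished <*>
```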

Debug Logging

Enable debug logging to troubleshoot pattern discovery:

S60_DEBUG_LOG_PATH=/debug/s60-log-analyser.debug.ndjson
S60_DEBUG_RUN_ID=prod-run1

Mount a volume to persist debug logs:

volumes:
  - ./debug-logs:/debug

Pattern Similarity Tuning

Adjust similarity merging behavior:

S60_PATTERN_SIMILARITY_MERGE=true
S60_PATTERN_SIMILARITY_THRESHOLD=0.90

Raise the threshold toward 1.0 to merge only near-identical patterns, or lower it to merge more aggressively.

Network Timeout Tuning

If you experience connection timeouts, increase timeout values:

S60_INGEST_CONNECT_TIMEOUT_SECONDS=30
S60_INGEST_READ_TIMEOUT_SECONDS=30
S60_INGEST_WRITE_TIMEOUT_SECONDS=30

Standalone Deployment

The Log Analyser can also be deployed standalone (without the Collector) and receive events directly:

docker run --rm \
  -p 89:80 \
  --env-file .log-template.env \
  --name s60-log-analyser \
  secure60/s60-log-analyser:1.03

Send events directly to the service:

curl -X POST http://localhost:89/ \
  -H 'Content-Type: application/json' \
  -d '{"message_text":"Your log message here"}'

Best Practices

  1. Start with defaults — The default configuration works well for most deployments. Only adjust settings if you encounter specific issues or have high-throughput requirements.

  2. Monitor flush statistics — Use the /stats/flush endpoint to monitor pattern export health and identify connection issues.

  3. Use appropriate worker count — Set S60_WEB_CONCURRENCY to match your CPU cores for optimal performance.

  4. Enable debug logging for troubleshooting — When investigating pattern discovery issues, enable debug logging to see what patterns are being discovered.

  5. Adjust flush interval based on volume — For high-volume deployments, increase S60_LOG_PATTERN_FLUSH_INTERVAL_SECONDS to reduce API call frequency. For low-volume deployments, decrease it for more real-time updates.

  6. Keep patterns manageable — Use S60_LOG_PATTERN_MAX to limit the number of patterns exported per flush. Focus on the most frequent patterns.

  7. Use similarity merging — Keep S60_PATTERN_SIMILARITY_MERGE=true to reduce duplicate patterns and improve pattern quality.
