Secure60 Log Analyser is a service that automatically discovers log patterns from your log events. It analyzes incoming log messages, identifies common patterns, and exports pattern statistics to your Secure60 project for intelligent log enrichment and analysis.
The simplest way to deploy Secure60 Log Analyser is alongside your Secure60 Collector. This allows the Collector to automatically send log events to the Log Analyser for pattern discovery.
Add the following lines to your Secure60 Collector’s `.env` file:

```
ALTERNATE_LOCATION=http://<hostname-or-ip>:89
ENABLE_ALTERNATE_ENDPOINT=true
```
Replace `<hostname-or-ip>` with the IP address or hostname where your Log Analyser will be accessible. The Collector will automatically forward log events to this endpoint.

Create a file named `.log-template.env` in the same directory as your `compose.yaml`:
```
S60_LOG_PATTERN_FLUSH_INTERVAL_SECONDS=120
S60_LOG_PATTERN_MAX=5000
S60_INGEST_ENDPOINT=https://ingest.secure60.io/data/1.0/metrics/project/<project-id>
S60_INGEST_TOKEN=<your-jwt-token>
S60_WEB_CONCURRENCY=6
```
Replace:

- `<project-id>` with your Secure60 project ID
- `<your-jwt-token>` with your Secure60 JWT token

Add the Log Analyser service to your `compose.yaml` file:
```yaml
services:
  s60-collector:
    image: "secure60/s60-collector:1.10"
    container_name: "s60-collector"
    ports:
      - "443:443"
      - "6514:6514"
    env_file:
      - .env
    restart: 'always'
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "10"
  s60-log-analyser:
    image: "secure60/s60-log-analyser:1.03"
    container_name: "s60-log-analyser"
    ports:
      - "89:80"
    env_file:
      - .log-template.env
    restart: 'always'
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "10"

networks:
  hostnet:
    name: host
    external: true
```
Start both services:

```bash
docker compose up -d
```
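Once the containers are up, you can confirm the Log Analyser is reachable through the port mapping above; the expected body matches the documented `/health` response:

```bash
docker compose ps
curl http://localhost:89/health
# Expected: {"ok": true}
```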
The Log Analyser will start processing log events from the Collector and automatically discover patterns. Discovered patterns are exported to your Secure60 project every 120 seconds (configurable via `S60_LOG_PATTERN_FLUSH_INTERVAL_SECONDS`).
Secure60 Log Analyser is configured using environment variables. The simplest deployment only requires the ingest endpoint and JWT token. All other settings have sensible defaults.
| Variable | Description |
|---|---|
| `S60_INGEST_ENDPOINT` | Full URL to your Secure60 ingest endpoint. Example: `https://ingest.secure60.io/data/1.0/metrics/project/<project-id>` |
| `S60_INGEST_TOKEN` | JWT token for authenticating with the Secure60 ingest endpoint. Sent as an `Authorization: Bearer <token>` header. |
Pattern discovery settings:

| Variable | Default | Description |
|---|---|---|
| `S60_MESSAGE_FIELD_PATH` | Auto-detect | Custom dotted path to the log message field (e.g., `log.original`). If not set, probes in order: `message_text`, `message`, `log.original`, `event.original` |
| `S60_NORMALIZE_TIMESTAMPS` | `true` | Enable/disable automatic timestamp normalization. When enabled, timestamps are replaced with `<*>` wildcards before pattern discovery |
| `S60_TIMESTAMP_PATTERNS` | Built-in patterns | Comma-separated list of custom regex patterns for timestamp detection. Example: `\d{4}-\d{2}-\d{2},\d{2}/\d{2}/\d{4}` |
| `S60_PATTERN_SIMILARITY_MERGE` | `true` | Enable/disable automatic merging of very similar patterns |
| `S60_PATTERN_SIMILARITY_THRESHOLD` | `0.90` | Similarity threshold (0.0-1.0) for pattern merging. Patterns with similarity >= the threshold are merged |
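As a rough illustration of timestamp normalization (the exact wildcard placement is internal to the service, so the pattern shown in the comments is an assumption):

```bash
# With S60_NORMALIZE_TIMESTAMPS=true (the default), the timestamp in this
# message is replaced with <*> before pattern discovery, so repeated events
# with different timestamps and usernames should collapse into one pattern,
# e.g. "<*> User login failed for user <*>" (illustrative output).
curl -X POST http://localhost:89/ \
  -H 'Content-Type: application/json' \
  -d '{"message_text":"2024-01-01 12:00:00 User login failed for user john"}'
```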
Pattern export settings:

| Variable | Default | Description |
|---|---|---|
| `S60_LOG_PATTERN_MAX` | `5000` | Maximum number of patterns to send on each flush. Always selects the top N by count |
| `S60_LOG_PATTERN_FLUSH_INTERVAL_SECONDS` | `120` | How often to flush discovered patterns to the ingest endpoint (in seconds) |
| `S60_LOG_PATTERN_STDOUT` | `false` | When `true`, prints discovered log patterns to stdout for debugging |
Performance settings:

| Variable | Default | Description |
|---|---|---|
| `S60_WEB_CONCURRENCY` | `1` | Number of worker processes to run. Increase for multi-core systems (recommended: 4-8 for high-throughput deployments) |
| `S60_LOG_PATTERN_CACHE_MAX` | `20000` | Maximum number of cached pattern mappings. Each worker maintains its own cache |
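For example, on an 8-core host handling a large pattern space, you might raise both values in `.log-template.env` (illustrative numbers, not official sizing guidance):

```
# Illustrative tuning for an 8-core, high-throughput host
S60_WEB_CONCURRENCY=8
S60_LOG_PATTERN_CACHE_MAX=50000
```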
Ingest timeout settings:

| Variable | Default | Description |
|---|---|---|
| `S60_INGEST_CONNECT_TIMEOUT_SECONDS` | `10` | Connection timeout for ingest endpoint requests (in seconds) |
| `S60_INGEST_READ_TIMEOUT_SECONDS` | `15` | Read timeout for ingest endpoint requests (in seconds) |
| `S60_INGEST_WRITE_TIMEOUT_SECONDS` | `15` | Write timeout for ingest endpoint requests (in seconds) |
| `S60_INGEST_POOL_TIMEOUT_SECONDS` | `5` | Connection pool timeout (in seconds) |
Debugging settings:

| Variable | Default | Description |
|---|---|---|
| `S60_DEBUG_LOG_PATH` | Not set | Path to write debug logs (NDJSON format). Example: `/debug/s60-log-analyser.debug.ndjson` |
| `S60_DEBUG_RUN_ID` | Not set | Run identifier for debug logs. Useful for correlating logs across restarts |
The Log Analyser receives JSON log events via HTTP POST. It extracts the log message from each event (using the configured field path) and analyzes it to discover patterns:

1. Locates the message field, probing `message_text`, `message`, `log.original`, `event.original` in order (or using a custom field path if `S60_MESSAGE_FIELD_PATH` is set)
2. Normalizes timestamps, replacing them with `<*>` wildcards before processing
3. Discovers the pattern, replacing variable parts of the message with `<*>` wildcards

Periodically (every `S60_LOG_PATTERN_FLUSH_INTERVAL_SECONDS`), the service exports the top N patterns (by count) to your Secure60 ingest endpoint.
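You can observe this pipeline end to end with the documented endpoints: post two events that differ only in a variable token, then inspect the pending patterns before the next flush:

```bash
# Two messages that should map to the same discovered pattern
curl -X POST http://localhost:89/ -H 'Content-Type: application/json' \
  -d '{"message_text":"User login failed for username: \"john\""}'
curl -X POST http://localhost:89/ -H 'Content-Type: application/json' \
  -d '{"message_text":"User login failed for username: \"bob\""}'

# The shared pattern should now appear with a pending_count of 2
curl http://localhost:89/stats/log-patterns
```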
When enabled, the service automatically merges very similar patterns that differ only slightly. This reduces duplicate patterns and improves pattern quality.
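For example (an illustrative pair, not actual service output), with the default threshold of 0.90 two near-identical patterns like these could be merged into a single pattern:

```
User login failed for username: <*> , please check username and password
User login failed for username: <*> please check username and password
```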
`POST /`

Ingest log events for pattern discovery. Accepts either a single JSON object or an array of JSON objects.

Request Example:

```bash
curl -X POST http://localhost:89/ \
  -H 'Content-Type: application/json' \
  -d '{"message_text":"User login failed for username: \"john\" , please check username and password"}'
```
Response:
```json
{
  "ok": true,
  "log_pattern_id": "a1b2c3d4e5f67890",
  "discovered": true,
  "from_cache": false,
  "field_used": "message_text"
}
```
Batch Request Example:
```bash
curl -X POST http://localhost:89/ \
  -H 'Content-Type: application/json' \
  -d '[
    {"message_text":"User login failed for username: \"john\" , please check username and password"},
    {"message_text":"User login failed for username: \"bob\" , please check username and password"}
  ]'
```
`GET /health`

Health check endpoint.
Response:
```json
{
  "ok": true
}
```
`GET /stats/log-patterns`

Returns the top 25 pending log patterns by count.
Response:
```json
{
  "ok": true,
  "pending_top": [
    {
      "log_pattern_id": "a1b2c3d4e5f67890",
      "pending_count": 150,
      "total_count": 150
    }
  ]
}
```
`GET /stats/flush`

Returns flush statistics and configuration.
Response:
```json
{
  "ok": true,
  "pid": 12345,
  "flush_interval_s": 120,
  "ingest_client_disconnects": 0,
  "ingest_endpoint": {
    "scheme": "https",
    "host": "ingest.secure60.io",
    "port": 443,
    "path": "/data/1.0/metrics/project/<project-id>"
  },
  "ingest_jwt_set": true,
  "pending_patterns": 42,
  "pending_total_count": 1250,
  "flush_last_ok_unix": 1704067200.0,
  "flush_last_error": null,
  "flush_consecutive_failures": 0
}
```
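A minimal monitoring sketch built on this endpoint (jq is assumed to be available, and the alert threshold is illustrative):

```bash
# Alert when several consecutive flushes have failed (threshold of 3 is illustrative)
fails=$(curl -s http://localhost:89/stats/flush | jq '.flush_consecutive_failures')
if [ "$fails" -ge 3 ]; then
  echo "Log Analyser: $fails consecutive flush failures" >&2
fi
```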
For high-throughput deployments, consider:

- Setting `S60_WEB_CONCURRENCY=6` or higher (one worker per CPU core)
- Increasing `S60_LOG_PATTERN_CACHE_MAX` if you have many unique patterns

If your log events use non-standard field names, set `S60_MESSAGE_FIELD_PATH` to a dotted path:
```
S60_MESSAGE_FIELD_PATH=log.original
```
Or for nested fields:
```
S60_MESSAGE_FIELD_PATH=event.log.message
```
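For instance, with `S60_MESSAGE_FIELD_PATH=event.log.message`, the Analyser would read the message from an event shaped like this (illustrative event, not a required schema):

```json
{"event": {"log": {"message": "User login failed for user john"}}}
```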
If your logs use non-standard timestamp formats, provide custom regex patterns:
```
S60_TIMESTAMP_PATTERNS=\d{4}-\d{2}-\d{2},\d{2}/\d{2}/\d{4}
```
Enable debug logging to troubleshoot pattern discovery:
```
S60_DEBUG_LOG_PATH=/debug/s60-log-analyser.debug.ndjson
S60_DEBUG_RUN_ID=prod-run1
```
Mount a volume to persist debug logs:
```yaml
volumes:
  - ./debug-logs:/debug
```
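Since debug logs are NDJSON (one JSON object per line), they can be followed with standard tools (jq assumed available):

```bash
# Follow the debug log and pretty-print each JSON line
tail -f ./debug-logs/s60-log-analyser.debug.ndjson | jq .
```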
Adjust similarity merging behavior:

- Set `S60_PATTERN_SIMILARITY_MERGE=false` to keep all patterns separate
- Set `S60_PATTERN_SIMILARITY_THRESHOLD=0.95` to merge only very similar patterns
- Set `S60_PATTERN_SIMILARITY_THRESHOLD=0.85` to merge more patterns

If you experience connection timeouts, increase the timeout values:
```
S60_INGEST_CONNECT_TIMEOUT_SECONDS=30
S60_INGEST_READ_TIMEOUT_SECONDS=30
S60_INGEST_WRITE_TIMEOUT_SECONDS=30
```
The Log Analyser can also be deployed standalone (without the Collector) and receive events directly:
```bash
docker run --rm \
  -p 89:80 \
  --env-file .log-template.env \
  --name s60-log-analyser \
  secure60/s60-log-analyser:1.03
```
Send events directly to the service:
```bash
curl -X POST http://localhost:89/ \
  -H 'Content-Type: application/json' \
  -d '{"message_text":"Your log message here"}'
```
- **Start with defaults.** The default configuration works well for most deployments. Only adjust settings if you encounter specific issues or have high-throughput requirements.
- **Monitor flush statistics.** Use the `/stats/flush` endpoint to monitor pattern export health and identify connection issues.
- **Use an appropriate worker count.** Set `S60_WEB_CONCURRENCY` to match your CPU cores for optimal performance.
- **Enable debug logging for troubleshooting.** When investigating pattern discovery issues, enable debug logging to see which patterns are being discovered.
- **Adjust the flush interval based on volume.** For high-volume deployments, increase `S60_LOG_PATTERN_FLUSH_INTERVAL_SECONDS` to reduce API call frequency. For low-volume deployments, decrease it for more real-time updates.
- **Keep patterns manageable.** Use `S60_LOG_PATTERN_MAX` to limit the number of patterns exported per flush. Focus on the most frequent patterns.
- **Use similarity merging.** Keep `S60_PATTERN_SIMILARITY_MERGE=true` to reduce duplicate patterns and improve pattern quality.