# Loki Integration

### Overview

Grafana Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. Unlike traditional log systems, Loki indexes only metadata (labels) rather than full log content, making it cost-efficient at scale. When alert rules defined in the Loki Ruler detect error conditions or anomalies in log streams, Loki routes those alerts through an external Alertmanager, which delivers a structured webhook payload to the platform.

This integration supports automatic alert creation on firing events and automatic resolution when Alertmanager sends a resolved notification.

### Integration Flow

1. Loki continuously ingests log streams from applications and infrastructure.
2. The Loki Ruler evaluates LogQL alert rules against incoming log data at a configured interval.
3. When a rule condition is met, the Ruler sends the alert to an external Alertmanager.
4. Alertmanager groups the alerts and delivers a webhook POST request to the platform endpoint.
5. When the log condition clears, Alertmanager sends a `resolved` notification and the platform automatically closes the alert.

### Webhook Payload Schema

The payload delivered to the platform follows the standard Prometheus Alertmanager webhook format (version 4).

```json
{
  "receiver": "string",
  "status": "firing | resolved",
  "alerts": [
    {
      "status": "firing | resolved",
      "labels": {
        "alertname": "string",
        "severity": "string",
        "env": "string"
      },
      "annotations": {
        "summary": "string",
        "description": "string"
      },
      "startsAt": "ISO8601 timestamp",
      "endsAt": "ISO8601 timestamp",
      "generatorURL": "string",
      "fingerprint": "string"
    }
  ],
  "groupLabels": {},
  "commonLabels": {
    "alertname": "string",
    "severity": "string"
  },
  "commonAnnotations": {
    "summary": "string",
    "description": "string"
  },
  "externalURL": "string",
  "version": "4",
  "groupKey": "string",
  "truncatedAlerts": 0
}
```
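To illustrate how a receiver might consume this schema, here is a minimal parsing sketch in Python. The `LokiAlert` dataclass and `parse_webhook` function are illustrative names, not part of the platform; the field access follows the version-4 schema above.

```python
import json
from dataclasses import dataclass

@dataclass
class LokiAlert:
    """One entry from the Alertmanager `alerts` array."""
    status: str        # "firing" or "resolved"
    fingerprint: str   # stable hash of the alert's label set
    alertname: str
    severity: str
    summary: str

def parse_webhook(body: str) -> list[LokiAlert]:
    """Extract the per-alert fields a receiver typically cares about."""
    payload = json.loads(body)
    return [
        LokiAlert(
            status=a["status"],
            fingerprint=a["fingerprint"],
            alertname=a["labels"]["alertname"],
            # severity and summary are optional in practice, so default them
            severity=a["labels"].get("severity", ""),
            summary=a["annotations"].get("summary", ""),
        )
        for a in payload.get("alerts", [])
    ]
```

Note that `alerts` can contain multiple entries per delivery: Alertmanager groups alerts according to its `route` configuration, so a receiver should iterate over the array rather than read only the first element.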

***

### Setup

#### Step 1 — Create an Alert Source on the Platform

1. Navigate to **Sources** → **Add Source**.
2. Search for **Loki** and select it.
3. Give the source a name and click **Save**.
4. Copy the generated **Webhook URL** and **Token**.

#### Step 2 — Install and Configure Alertmanager

Loki does not include a built-in Alertmanager. You must run an external Prometheus Alertmanager and point Loki's Ruler at it.

Install Alertmanager using your preferred method (binary, Docker, Helm). Then configure it to forward alerts to the platform:

**`alertmanager.yml`**

```yaml
global:
  resolve_timeout: 5m

route:
  receiver: itoc360-webhook
  group_wait: 10s
  group_interval: 1m
  repeat_interval: 4h

receivers:
  - name: itoc360-webhook
    webhook_configs:
      - url: "https://<your-platform-url>/functions/v1/events?token=<your-source-token>"
        send_resolved: true
```

> `send_resolved: true` is required for automatic alert resolution on the platform.

#### Step 3 — Configure the Loki Ruler

Point the Loki Ruler to your external Alertmanager in `loki-config.yaml`:

```yaml
ruler:
  alertmanager_url: http://<alertmanager-host>:9093
  storage:
    type: local
    local:
      directory: /etc/loki/rules
  enable_api: true
```

#### Step 4 — Create Alert Rules

Create rule files in the configured rules directory. Loki uses **LogQL** for alert expressions, allowing you to alert on log content, patterns, and rates.

**Example: `rules/production.yaml`**

```yaml
groups:
  - name: loki-critical
    interval: 1m
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate({app="api"} |= "ERROR" [5m])) > 10
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "High error rate detected in API logs"
          description: "Application {{ $labels.app }} is producing more than 10 errors per second over the last 5 minutes."

      - alert: StackOverflowException
        expr: |
          count_over_time({app=~".+"} |= "StackOverflow" [5m]) > 0
        for: 0m
        labels:
          severity: critical
        annotations:
          summary: "StackOverflow exception detected in logs"
          description: "StackOverflow exception found in log stream {{ $labels.app }}."
```

> The `severity` label in `labels` is used by the platform for priority mapping (see table below).

> Loki alert rules use **LogQL** syntax — log stream selectors (`{app="api"}`) combined with filter expressions (`|= "ERROR"`) and metric queries (`rate`, `count_over_time`).

#### Step 5 — Verify the Integration

After starting Loki and Alertmanager:

1. Open the Alertmanager UI at `http://<alertmanager-host>:9093` — active alerts should appear there first.
2. Trigger a test rule and wait for the `group_wait` period.
3. Confirm the alert appears on the platform under the source you created.
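If you prefer not to wait for a real rule to fire, you can exercise the endpoint directly with a synthetic payload. The sketch below builds a minimal version-4 firing payload; the function name and the `fingerprint` value are made up for testing, and the webhook URL and token are the placeholders from Step 1.

```python
import json
from datetime import datetime, timezone

def make_test_payload(alertname: str = "HighErrorRate") -> str:
    """Build a minimal version-4 firing payload for manual testing.

    POST this to the source webhook URL from Step 1 to create a test
    alert without waiting for a Loki rule to fire.
    """
    now = datetime.now(timezone.utc).isoformat()
    return json.dumps({
        "receiver": "itoc360-webhook",
        "status": "firing",
        "alerts": [{
            "status": "firing",
            "labels": {"alertname": alertname, "severity": "critical"},
            "annotations": {"summary": "Manual integration test"},
            "startsAt": now,
            "endsAt": "0001-01-01T00:00:00Z",  # zero time = still firing
            "fingerprint": "deadbeefcafef00d",  # dummy value for testing
        }],
        "groupLabels": {},
        "commonLabels": {"alertname": alertname},
        "commonAnnotations": {},
        "externalURL": "http://alertmanager:9093",
        "version": "4",
        "groupKey": "{}:{}",
        "truncatedAlerts": 0,
    })
```

You could then deliver it with any HTTP client, e.g. `curl -X POST -H "Content-Type: application/json" -d @payload.json "https://<your-platform-url>/functions/v1/events?token=<your-source-token>"`.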

### Sample Payload

The following payloads were captured during integration testing.

**ALERT (firing):**

```json
{
  "receiver": "itoc360-webhook",
  "status": "firing",
  "alerts": [
    {
      "status": "firing",
      "labels": {
        "alertname": "HighErrorRate",
        "env": "api",
        "severity": "critical"
      },
      "annotations": {
        "summary": "High error rate detected in API logs",
        "description": "Application api is producing more than 10 errors per second over the last 5 minutes."
      },
      "startsAt": "2026-03-10T10:44:47.805Z",
      "endsAt": "0001-01-01T00:00:00Z",
      "generatorURL": "/graph?g0.expr=...",
      "fingerprint": "34e164e9af873ac1"
    }
  ],
  "groupLabels": {},
  "commonLabels": {
    "alertname": "HighErrorRate",
    "env": "api",
    "severity": "critical"
  },
  "commonAnnotations": {
    "summary": "High error rate detected in API logs"
  },
  "externalURL": "http://alertmanager:9093",
  "version": "4",
  "groupKey": "{}:{}",
  "truncatedAlerts": 0
}
```

**RESOLVE (resolved):**

```json
{
  "receiver": "itoc360-webhook",
  "status": "resolved",
  "alerts": [
    {
      "status": "resolved",
      "labels": {
        "alertname": "HighErrorRate",
        "severity": "critical"
      },
      "annotations": {
        "summary": "High error rate detected in API logs"
      },
      "startsAt": "2026-03-10T10:44:47.805Z",
      "endsAt": "2026-03-10T11:01:00.000Z",
      "fingerprint": "34e164e9af873ac1"
    }
  ],
  "version": "4"
}
```

### Field Mapping Reference

| Payload Field                       | Description                                                           |
| ----------------------------------- | --------------------------------------------------------------------- |
| `status`                            | Top-level event type: `firing` → ALERT, `resolved` → RESOLVE          |
| `alerts[0].fingerprint`             | Unique identifier per alert label set — used for fingerprint matching |
| `alerts[0].labels.alertname`        | Name of the alert rule that fired                                     |
| `alerts[0].labels.severity`         | Severity label from the rule definition — used for priority mapping   |
| `alerts[0].annotations.summary`     | Short human-readable alert title                                      |
| `alerts[0].annotations.description` | Detailed description of the alert condition                           |
| `alerts[0].startsAt`                | ISO 8601 timestamp when the alert started firing                      |
| `alerts[0].endsAt`                  | ISO 8601 timestamp when resolved (`0001-...` means still active)      |
| `commonLabels`                      | Labels shared across all alerts in this group                         |
| `commonAnnotations`                 | Annotations shared across all alerts in this group                    |
| `groupKey`                          | Alertmanager group key used for deduplication                         |
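The `status` → event-type mapping in the first row can be expressed as a one-liner; `event_type` is an illustrative name, not a platform API.

```python
def event_type(payload: dict) -> str:
    """Map the top-level Alertmanager status to a platform event type."""
    return {"firing": "ALERT", "resolved": "RESOLVE"}[payload["status"]]
```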

### Priority Mapping

The platform maps the `severity` label from the alert rule to an internal priority level.

| Loki `severity` Label | Platform Priority |
| --------------------- | ----------------- |
| `critical`            | CRITICAL          |
| `error`               | HIGH              |
| `warning`             | MEDIUM            |
| `info`                | LOW               |
| *(not set)*           | MEDIUM (default)  |

> You control the `severity` label in your alert rule definitions. Use consistent values across your rule files for predictable priority routing.
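The table above translates directly into a lookup with a fallback. This is a sketch of the mapping logic, not the platform's actual implementation; `map_priority` is an illustrative name.

```python
# Priority mapping from the table above; the default applies when the
# `severity` label is absent or has an unrecognized value.
SEVERITY_TO_PRIORITY = {
    "critical": "CRITICAL",
    "error": "HIGH",
    "warning": "MEDIUM",
    "info": "LOW",
}

def map_priority(labels: dict) -> str:
    """Map an alert's `severity` label to a platform priority."""
    return SEVERITY_TO_PRIORITY.get(labels.get("severity", ""), "MEDIUM")
```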

### RESOLVE Detection

The platform automatically resolves an alert when Alertmanager sends a payload with `"status": "resolved"`. This requires `send_resolved: true` in your Alertmanager webhook configuration (set in Step 2).

The resolved event is matched to the original alert using the `fingerprint` field, which Alertmanager generates deterministically from the alert's label set. As long as the labels do not change between firing and resolution, the fingerprint will match and the alert will be closed.
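The lifecycle described above can be sketched as a fingerprint-keyed store. This is a minimal in-memory illustration, assuming open alerts are keyed by the Alertmanager `fingerprint` field; the function and return values are made up for the example.

```python
# Open alerts keyed by Alertmanager fingerprint (illustrative sketch).
open_alerts: dict[str, dict] = {}

def handle_alert(alert: dict) -> str:
    """Process one entry from the webhook's `alerts` array."""
    fp = alert["fingerprint"]
    if alert["status"] == "firing":
        open_alerts[fp] = alert          # create, or refresh on repeat
        return "ALERT"
    # resolved: close only if a matching firing event was seen
    if open_alerts.pop(fp, None) is not None:
        return "RESOLVE"
    return "IGNORED"                     # resolve with no open alert
```

Because the fingerprint is derived from the label set, a rule that adds or changes labels between evaluations will produce a new fingerprint, and the resolved notification will no longer match the original alert.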
