# Collect Kubernetes logs and forward them to Loki

You can configure Alloy to collect logs and forward them to a Loki database.

This topic describes how to:

- Configure logs delivery.
- Collect logs from Kubernetes Pods.
## Components used in this topic

- `discovery.kubernetes`
- `discovery.relabel`
- `local.file_match`
- `loki.source.file`
- `loki.source.kubernetes`
- `loki.source.kubernetes_events`
- `loki.process`
- `loki.write`
## Before you begin

- Ensure that you are familiar with log labeling when working with Loki.
- Identify where to write the collected logs. You can write logs to a Loki endpoint such as Grafana Loki, Grafana Cloud, or Grafana Enterprise Logs.
- Be familiar with the concept of components in Alloy.
## Configure logs delivery

Before components can collect logs, you must have a component responsible for writing those logs somewhere. The `loki.write` component delivers logs to a Loki endpoint. After a `loki.write` component is defined, you can use other Alloy components to forward logs to it.

To configure a `loki.write` component for logs delivery, complete the following steps:
1. Add the following `loki.write` component to your configuration file:

   ```alloy
   loki.write "<LABEL>" {
     endpoint {
       url = "<LOKI_URL>"
     }
   }
   ```

   Replace the following:

   - `<LABEL>`: The label for the component, such as `default`. The label you use must be unique across all `loki.write` components in the same configuration file.
   - `<LOKI_URL>`: The full URL of the Loki endpoint to send logs to, such as `https://logs-us-central1.grafana.net/loki/api/v1/push`.
2. If your endpoint requires basic authentication, paste the following inside the `endpoint` block:

   ```alloy
   basic_auth {
     username = "<USERNAME>"
     password = "<PASSWORD>"
   }
   ```

   Replace the following:

   - `<USERNAME>`: The basic authentication username.
   - `<PASSWORD>`: The basic authentication password or API key.

3. If you have more than one endpoint to write logs to, repeat the `endpoint` block for each additional endpoint.
The following simple example demonstrates a `loki.write` component configured with multiple endpoints and a mixture of basic authentication, alongside a `loki.source.file` component that collects logs from the filesystem of the Alloy container.
```alloy
loki.write "default" {
  endpoint {
    url = "http://localhost:3100/loki/api/v1/push"
  }

  endpoint {
    url = "https://logs-us-central1.grafana.net/loki/api/v1/push"

    basic_auth {
      username = "<USERNAME>"
      password = "<PASSWORD>"
    }
  }
}

loki.source.file "example" {
  // Collect logs from the listed files, attaching a "color" label to each target.
  targets = [
    {__path__ = "/tmp/foo.txt", "color" = "pink"},
    {__path__ = "/tmp/bar.txt", "color" = "blue"},
    {__path__ = "/tmp/baz.txt", "color" = "grey"},
  ]
  forward_to = [loki.write.default.receiver]
}
```
Replace the following:

- `<USERNAME>`: The remote write username.
- `<PASSWORD>`: The remote write password.

For more information on configuring logs delivery, refer to loki.write.
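Rather than hard-coding credentials in the configuration file, you can read them from the environment with the `sys.env` standard library function. The following is a minimal sketch; the variable names `LOKI_USERNAME` and `LOKI_PASSWORD` are illustrative, not required by Alloy:

```alloy
loki.write "default" {
  endpoint {
    url = "https://logs-us-central1.grafana.net/loki/api/v1/push"

    basic_auth {
      // Read the credentials from environment variables set on the
      // Alloy process, instead of storing them in the file.
      username = sys.env("LOKI_USERNAME")
      password = sys.env("LOKI_PASSWORD")
    }
  }
}
```

This keeps secrets out of the configuration file, which is useful when the file is stored in version control or mounted from a ConfigMap.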
## Collect logs from Kubernetes

You can configure Alloy to collect several kinds of logs from Kubernetes:

- System logs
- Pod logs
- Kubernetes Events

Thanks to the component architecture, you can follow one or all of the following sections to retrieve these logs. Jump to the relevant section after you have followed Configure logs delivery to ensure that collected logs can be written somewhere.
### System logs

To get the system logs, use the following components:

- `local.file_match`: Discovers files on the local filesystem.
- `loki.source.file`: Reads log entries from files.
- `loki.write`: Sends logs to the Loki endpoint. You should have already configured this in the Configure logs delivery section.

Here is an example that uses these stages:
```alloy
// local.file_match discovers files on the local filesystem using glob patterns
// and the doublestar library. It returns an array of file paths.
local.file_match "node_logs" {
  path_targets = [{
    // Monitor syslog to scrape node logs.
    __path__  = "/var/log/syslog",
    job       = "node/syslog",
    node_name = sys.env("HOSTNAME"),
    cluster   = "<CLUSTER_NAME>",
  }]
}

// loki.source.file reads log entries from files and forwards them to other loki.* components.
// You can specify multiple loki.source.file components by giving them different labels.
loki.source.file "node_logs" {
  targets    = local.file_match.node_logs.targets
  forward_to = [loki.write.<WRITE_COMPONENT_NAME>.receiver]
}
```
Replace the following values:

- `<CLUSTER_NAME>`: A label for this specific Kubernetes cluster, such as `production` or `us-east-1`.
- `<WRITE_COMPONENT_NAME>`: The name of your `loki.write` component, such as `default`.
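The same pattern extends to other node log files. As a sketch, the following additionally tails the kernel log; the component labels, the `/var/log/kern.log` path, and the `loki.write.default` reference are assumptions you should adapt to your setup:

```alloy
// Discover the kernel log on each node. The job label mirrors the
// "node/syslog" convention used above.
local.file_match "kernel_logs" {
  path_targets = [{
    __path__  = "/var/log/kern.log",
    job       = "node/kernel",
    node_name = sys.env("HOSTNAME"),
  }]
}

// Tail the discovered file and forward entries to the writer.
loki.source.file "kernel_logs" {
  targets    = local.file_match.kernel_logs.targets
  forward_to = [loki.write.default.receiver]
}
```

Because each `local.file_match`/`loki.source.file` pair is independent, you can add one pair per log source without touching the existing pipeline.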
### Pod logs

> **Tip:** You can get Pod logs from the log files on each node. In this guide, you get the logs through the Kubernetes API instead, because it doesn't require system privileges for Alloy.

You need the following components:

- `discovery.kubernetes`: Discovers Pod information and lists it for use by other components.
- `discovery.relabel`: Applies a relabeling policy to the list of Pods.
- `loki.source.kubernetes`: Tails logs from the list of Kubernetes Pod targets.
- `loki.process`: Modifies logs before sending them to the next component.
- `loki.write`: Sends logs to the Loki endpoint. You should have already configured this in the Configure logs delivery section.

Here is an example that uses these stages:
```alloy
// discovery.kubernetes allows you to find scrape targets from Kubernetes resources.
// It watches cluster state and ensures targets are continually synced with what is
// currently running in your cluster.
discovery.kubernetes "pod" {
  role = "pod"
}

// discovery.relabel rewrites the label set of the input targets by applying one or more
// relabeling rules. If no rules are defined, then the input targets are exported as-is.
discovery.relabel "pod_logs" {
  targets = discovery.kubernetes.pod.targets

  // Label creation - "namespace" field from "__meta_kubernetes_namespace"
  rule {
    source_labels = ["__meta_kubernetes_namespace"]
    action        = "replace"
    target_label  = "namespace"
  }

  // Label creation - "pod" field from "__meta_kubernetes_pod_name"
  rule {
    source_labels = ["__meta_kubernetes_pod_name"]
    action        = "replace"
    target_label  = "pod"
  }

  // Label creation - "container" field from "__meta_kubernetes_pod_container_name"
  rule {
    source_labels = ["__meta_kubernetes_pod_container_name"]
    action        = "replace"
    target_label  = "container"
  }

  // Label creation - "app" field from "__meta_kubernetes_pod_label_app_kubernetes_io_name"
  rule {
    source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_name"]
    action        = "replace"
    target_label  = "app"
  }

  // Label creation - "job" field from "__meta_kubernetes_namespace" and "__meta_kubernetes_pod_container_name"
  // Concatenates the values __meta_kubernetes_namespace/__meta_kubernetes_pod_container_name.
  rule {
    source_labels = ["__meta_kubernetes_namespace", "__meta_kubernetes_pod_container_name"]
    action        = "replace"
    target_label  = "job"
    separator     = "/"
    replacement   = "$1"
  }

  // Label creation - "__path__" field from "__meta_kubernetes_pod_uid" and "__meta_kubernetes_pod_container_name"
  // Concatenates the values __meta_kubernetes_pod_uid/__meta_kubernetes_pod_container_name.log.
  rule {
    source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
    action        = "replace"
    target_label  = "__path__"
    separator     = "/"
    replacement   = "/var/log/pods/*$1/*.log"
  }

  // Label creation - "container_runtime" field from "__meta_kubernetes_pod_container_id"
  rule {
    source_labels = ["__meta_kubernetes_pod_container_id"]
    action        = "replace"
    target_label  = "container_runtime"
    regex         = "^(\\S+):\\/\\/.+$"
    replacement   = "$1"
  }
}

// loki.source.kubernetes tails logs from Kubernetes containers using the Kubernetes API.
loki.source.kubernetes "pod_logs" {
  targets    = discovery.relabel.pod_logs.output
  forward_to = [loki.process.pod_logs.receiver]
}

// loki.process receives log entries from other Loki components, applies one or more
// processing stages, and forwards the results to the list of receivers in the
// component's arguments.
loki.process "pod_logs" {
  stage.static_labels {
    values = {
      cluster = "<CLUSTER_NAME>",
    }
  }

  forward_to = [loki.write.<WRITE_COMPONENT_NAME>.receiver]
}
```
Replace the following values:

- `<CLUSTER_NAME>`: A label for this specific Kubernetes cluster, such as `production` or `us-east-1`.
- `<WRITE_COMPONENT_NAME>`: The name of your `loki.write` component, such as `default`.
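If some namespaces are too noisy, the `loki.process` component in this pipeline can be extended with a `stage.match` block that drops matching entries before they are written. The following is an illustrative sketch; the `kube-system` selector is an example, not a recommendation:

```alloy
loki.process "pod_logs" {
  // Drop all entries whose "namespace" label is kube-system.
  // Entries that match the selector are processed by the inner
  // action; "drop" discards them entirely.
  stage.match {
    selector = "{namespace=\"kube-system\"}"
    action   = "drop"
  }

  stage.static_labels {
    values = {
      cluster = "<CLUSTER_NAME>",
    }
  }

  forward_to = [loki.write.<WRITE_COMPONENT_NAME>.receiver]
}
```

Dropping at the processing stage reduces both network traffic to Loki and ingestion cost, at the price of losing those logs permanently.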
### Kubernetes Cluster Events

You need the following components:

- `loki.source.kubernetes_events`: Tails events from the Kubernetes API.
- `loki.process`: Modifies logs before sending them to the next component.
- `loki.write`: Sends logs to the Loki endpoint. You should have already configured this in the Configure logs delivery section.

Here is an example that uses these stages:
```alloy
// loki.source.kubernetes_events tails events from the Kubernetes API and converts them
// into log lines to forward to other Loki components.
loki.source.kubernetes_events "cluster_events" {
  job_name   = "integrations/kubernetes/eventhandler"
  log_format = "logfmt"
  forward_to = [
    loki.process.cluster_events.receiver,
  ]
}

// loki.process receives log entries from other Loki components, applies one or more
// processing stages, and forwards the results to the list of receivers in the
// component's arguments.
loki.process "cluster_events" {
  forward_to = [loki.write.<WRITE_COMPONENT_NAME>.receiver]

  stage.static_labels {
    values = {
      cluster = "<CLUSTER_NAME>",
    }
  }

  stage.labels {
    values = {
      kubernetes_cluster_events = "job",
    }
  }
}
```
Replace the following values:

- `<CLUSTER_NAME>`: A label for this specific Kubernetes cluster, such as `production` or `us-east-1`.
- `<WRITE_COMPONENT_NAME>`: The name of your `loki.write` component, such as `default`.