Logstash Tutorial

Logstash - Output Stage

Output is the last stage in the Logstash pipeline; it sends the filtered data from the input logs to a specified destination. Logstash offers multiple output plugins to stash the filtered log events in various storage and search engines.

Storing Logs

Logstash can store the filtered logs in a file, the Elasticsearch engine, stdout, AWS CloudWatch, and so on. Network protocols such as TCP, UDP, and WebSocket can also be used in Logstash to transfer log events to remote storage systems.

In the ELK stack, users use the Elasticsearch engine to store the log events. In the following example, we will generate log events for a local Elasticsearch engine.

Installing the Elasticsearch Output Plugin

We can install the Elasticsearch output plugin with the following command.

>logstash-plugin install logstash-output-elasticsearch

logstash.conf

This config file contains an Elasticsearch plugin, which stores the output event in Elasticsearch installed locally.

input {
   file {
      path => "C:/tpwork/logstash/bin/log/input.log"
   }
}
filter {
   grok {
      match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:taskid} - %{NOTSPACE:logger} - %{WORD:label}( - %{INT:duration:int})?" ]
   }
   if [logger] == "TRANSACTION_START" {
      aggregate {
         task_id => "%{taskid}"
         code => "map['sql_duration'] = 0"
         map_action => "create"
      }
   }
   if [logger] == "SQL" {
      aggregate {
         task_id => "%{taskid}"
         code => "map['sql_duration'] ||= 0 ;
            map['sql_duration'] += event.get('duration')"
      }
   }
   if [logger] == "TRANSACTION_END" {
      aggregate {
         task_id => "%{taskid}"
         code => "event.set('sql_duration', map['sql_duration'])"
         end_of_task => true
         timeout => 120
      }
   }
   mutate {
      add_field => {"user" => "tutorialspoint.com"}
   }
}
output {
   elasticsearch {
      hosts => ["127.0.0.1:9200"]
   }
}
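The aggregate filter above keeps a per-task map across events: `TRANSACTION_START` creates the map, each `SQL` event adds its `duration` to `map['sql_duration']`, and `TRANSACTION_END` copies the total onto the final event and ends the task. A minimal Python sketch of that same logic may make the flow easier to follow (the function name and the plain-dict events are illustrative, not Logstash API):

```python
from collections import defaultdict

def aggregate_sql_duration(events):
    """Re-implements the aggregate filter's logic: per taskid, sum the
    'duration' of SQL events and attach the total as 'sql_duration'
    to the TRANSACTION_END event."""
    totals = defaultdict(int)              # plays the role of map['sql_duration']
    for event in events:
        logger = event["logger"]
        taskid = event["taskid"]
        if logger == "TRANSACTION_START":
            totals[taskid] = 0             # map_action => "create"
        elif logger == "SQL":
            totals[taskid] += event["duration"]
        elif logger == "TRANSACTION_END":
            event["sql_duration"] = totals.pop(taskid)  # end_of_task => true
    return events

events = aggregate_sql_duration([
    {"logger": "TRANSACTION_START", "taskid": "48566"},
    {"logger": "SQL", "taskid": "48566", "duration": 320},
    {"logger": "SQL", "taskid": "48566", "duration": 200},
    {"logger": "TRANSACTION_END", "taskid": "48566"},
])
print(events[-1]["sql_duration"])  # 520, as seen later in the Elasticsearch output
```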

Input.log

The following code block shows the input log data.

INFO - 48566 - TRANSACTION_START - start
INFO - 48566 - SQL - transaction1 - 320
INFO - 48566 - SQL - transaction1 - 200
INFO - 48566 - TRANSACTION_END - end
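To see how the grok pattern in logstash.conf maps these lines to fields, here is a rough Python equivalent. The regex simplifies the real grok definitions (`LOGLEVEL`, `NOTSPACE`, `WORD`, `INT` are broader in grok), so treat it as an illustration rather than an exact match:

```python
import re

# Simplified stand-ins for the grok patterns used in logstash.conf
LOG_LINE = re.compile(
    r"(?P<loglevel>INFO|WARN|ERROR|DEBUG|TRACE) - "   # %{LOGLEVEL:loglevel}
    r"(?P<taskid>\S+) - "                             # %{NOTSPACE:taskid}
    r"(?P<logger>\S+) - "                             # %{NOTSPACE:logger}
    r"(?P<label>\w+)( - (?P<duration>\d+))?"          # %{WORD:label}( - %{INT:duration:int})?
)

def parse(line):
    """Extract the fields grok would produce for one input.log line."""
    m = LOG_LINE.search(line)
    if m is None:
        return None
    fields = m.groupdict()
    if fields["duration"] is not None:
        fields["duration"] = int(fields["duration"])  # grok's :int conversion
    return fields

print(parse("INFO - 48566 - SQL - transaction1 - 320"))
```

The optional `( - %{INT:duration:int})?` group is why `TRANSACTION_START` and `TRANSACTION_END` lines parse without a `duration` field.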

Start Elasticsearch at Localhost

To start Elasticsearch at the localhost, you should use the following command.

C:\elasticsearch\bin> elasticsearch

Once Elasticsearch is ready, you can check it by typing the following URL in your browser.

http://localhost:9200

Response

The following code block shows the response of Elasticsearch at localhost.

{
   "name" : "Doctor Dorcas",
   "cluster_name" : "elasticsearch",
   "version" : {
      "number" : "2.1.1",
      "build_hash" : "40e2c53a6b6c2972b3d13846e450e66f4375bd71",
      "build_timestamp" : "2015-12-15T13:05:55Z",
      "build_snapshot" : false,
      "lucene_version" : "5.3.1"
   },
   "tagline" : "You Know, for Search"
}

Note − For more information about Elasticsearch, refer to the official Elasticsearch documentation.

Now, run Logstash with the above-mentioned logstash.conf.

>logstash -f logstash.conf

After you paste the above-mentioned text into input.log, Logstash will store that text in Elasticsearch. You can check the stored data by typing the following URL in the browser.

http://localhost:9200/logstash-2017.01.01/_search?pretty

Response

It shows the data in JSON format, stored in the index logstash-2017.01.01.

{
   "took" : 20,
   "timed_out" : false,
   "_shards" : {
      "total" : 5,
      "successful" : 5,
      "failed" : 0
   },
   "hits" : {
      "total" : 10,
      "max_score" : 1.0,
      "hits" : [ {
         "_index" : "logstash-2017.01.01",
         "_type" : "logs",
         "_id" : "AVlZ9vF8hshdrGm02KOs",
         "_score" : 1.0,
         "_source":{
            "duration":200,"path":"C:/tpwork/logstash/bin/log/input.log",
            "@timestamp":"2017-01-01T12:17:49.140Z","loglevel":"INFO",
            "logger":"SQL","@version":"1","host":"wcnlab-PC",
            "label":"transaction1",
            "message":" INFO - 48566 - SQL - transaction1 - 200\r",
            "user":"tutorialspoint.com","taskid":"48566","tags":[]
         }
      },
      {
         "_index" : "logstash-2017.01.01",
         "_type" : "logs",
         "_id" : "AVlZ9vF8hshdrGm02KOt",
         "_score" : 1.0,
         "_source":{
            "sql_duration":520,"path":"C:/tpwork/logstash/bin/log/input.log",
            "@timestamp":"2017-01-01T12:17:49.145Z","loglevel":"INFO",
            "logger":"TRANSACTION_END","@version":"1","host":"wcnlab-PC",
            "label":"end",
            "message":" INFO - 48566 - TRANSACTION_END - end\r",
            "user":"tutorialspoint.com","taskid":"48566","tags":[]
         }
      }
   }
}