Kibana - Quick Guide

Kibana - Overview

Kibana is an open source, browser-based visualization tool mainly used to analyze large volumes of logs in the form of line graphs, bar graphs, pie charts, heat maps, region maps, coordinate maps, gauges, goals, Timelion, etc. The visualization makes it easy to predict or to see changes in the trends of errors or other significant events of the input source. Kibana works in sync with Elasticsearch and Logstash, which together form the so-called ELK stack.

What is ELK Stack?

ELK stands for Elasticsearch, Logstash, and Kibana. ELK is one of the popular log management platforms used worldwide for log analysis. In the ELK stack, Logstash extracts the logging data or other events from different input sources. It processes the events and later stores them in Elasticsearch.

Kibana is a visualization tool, which accesses the logs from Elasticsearch and is able to display them to the user in the form of line graphs, bar graphs, pie charts, etc.

The basic flow of the ELK Stack is shown in the image here −

[Image: ELK stack flow diagram]

Logstash is responsible for collecting the data from all the remote sources where the logs are filed, and it pushes the same to Elasticsearch.

Elasticsearch acts as a database where the data is collected, and Kibana uses the data from Elasticsearch to present it to the user in the form of bar graphs, pie charts and heat maps, as shown below −

[Image: Kibana visualizations of Elasticsearch data]

It shows the data on a real-time basis, for example, day-wise or hourly, to the user. The Kibana UI is user friendly and very easy for a beginner to understand.

Features of Kibana

Kibana offers its users the following features −

Visualization

Kibana has a lot of ways to visualize data in an easy way. Some of the commonly used ones are the vertical bar chart, horizontal bar chart, pie chart, line graph and heat map.

Dashboard

When we have the visualizations ready, all of them can be placed on one board – the Dashboard. Observing different sections together gives you a clear overall idea about what exactly is happening.

Dev Tools

You can work with your indexes using dev tools. Beginners can add dummy indexes from dev tools, and also add, update and delete data and use the indexes to create visualizations.

Reports

All the data in the form of visualizations and dashboards can be converted to reports (CSV format), embedded in code, or shared with others in the form of URLs.

Filters and Search query

You can make use of filters and search queries to get the required details for a particular input from a dashboard or visualization tool.

Plugins

You can add third party plugins to add new visualizations or other UI additions in Kibana.

Coordinate and Region Maps

Coordinate and region maps in Kibana help to show the visualization on a geographical map, giving a realistic view of the data.

Timelion

Timelion, also called timeline, is yet another visualization tool which is mainly used for time based data analysis. To work with timeline, we need to use a simple expression language which helps us connect to the index and also perform calculations on the data to obtain the results we need. It helps more in comparing data to the previous cycle in terms of a week, month, etc.
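The expression language referred to above can be illustrated with a short example. Assuming an index named medicalvisits-* (the index name here is just an assumption), the following Timelion expression plots the current series next to the same series shifted back one week:

```
.es(index=medicalvisits-*),
.es(index=medicalvisits-*, offset=-1w).label("previous week")
```

The offset=-1w argument is what makes the week-over-week comparison possible.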

Canvas

Canvas is yet another powerful feature in Kibana. Using canvas visualization, you can represent your data in different colour combinations, shapes, texts and multiple pages, basically called a workpad.

Advantages of Kibana

Kibana offers the following advantages to its users −

  1. Contains an open source browser based visualization tool mainly used to analyse large volumes of logs in the form of line graphs, bar graphs, pie charts, heat maps etc.

  2. Simple and easy for beginners to understand.

  3. Ease of conversion of visualization and dashboard into reports.

  4. Canvas visualization helps to analyse complex data in an easy way.

  5. Timelion visualization in Kibana helps to compare data backwards to understand the performance better.

Disadvantages of Kibana

  1. Adding plugins to Kibana can be very tedious if there is a version mismatch.

  2. You tend to face issues when you want to upgrade from an older version to a new one.

Kibana - Environment Setup

To start working with Kibana, we need to install Logstash, Elasticsearch and Kibana. In this chapter, we will try to understand the installation of the ELK stack.

We would discuss the following installations here −

  1. Elasticsearch Installation

  2. Logstash Installation

  3. Kibana Installation

Elasticsearch Installation

Detailed documentation on Elasticsearch exists in our library. You can check here for elasticsearch installation. You will have to follow the steps mentioned in the tutorial to install Elasticsearch.

Once done with the installation, start the elasticsearch server as follows −

Step 1

For Windows

> cd kibanaproject/elasticsearch-6.5.4/elasticsearch-6.5.4/bin
> elasticsearch

Please note for Windows users, the JAVA_HOME variable has to be set to the Java JDK path.

For Linux

$ cd kibanaproject/elasticsearch-6.5.4/elasticsearch-6.5.4/bin
$ elasticsearch

The default port for elasticsearch is 9200. Once done, you can check elasticsearch on port 9200 of localhost, http://localhost:9200/, as shown below −

[Image: elasticsearch response at http://localhost:9200]

Logstash Installation

For Logstash installation, follow this elasticsearch installation, which already exists in our library.

Kibana Installation

Go to the official Kibana site − https://www.elastic.co/products/kibana


Click the downloads link on the top right corner and it will display a screen as follows −

[Image: Kibana download page]

Click the Download button for Kibana. Please note that to work with Kibana we need a 64 bit machine; it will not work with 32 bit.


In this tutorial, we are going to use Kibana version 6. The download option is available for Windows, Mac and Linux. You can download as per your choice.

Create a folder and unpack the tar/zip downloads for Kibana. We are going to work with sample data uploaded in elasticsearch. Thus, for now, let us see how to start elasticsearch and Kibana. For this, go to the folder where Kibana is unpacked.

For Windows

> cd kibanaproject/kibana-6.5.4/kibana-6.5.4/bin
> kibana

For Linux

$ cd kibanaproject/kibana-6.5.4/kibana-6.5.4/bin
$ kibana

Once Kibana starts, the user can see the following screen −

[Image: Kibana startup console]

Once you see the ready signal in the console, you can open Kibana in a browser using http://localhost:5601/. The default port on which Kibana is available is 5601.

The user interface of Kibana is as shown here −

[Image: Kibana user interface]

In our next chapter, we will learn how to use the UI of Kibana. To know the Kibana version on the Kibana UI, go to the Management tab on the left side and it will display the Kibana version we are currently using.


Kibana - Introduction to ELK Stack

Kibana is an open source visualization tool mainly used to analyze large volumes of logs in the form of line graphs, bar graphs, pie charts, heat maps etc. Kibana works in sync with Elasticsearch and Logstash, which together form the so-called ELK stack.

ELK stands for Elasticsearch, Logstash, and Kibana. ELK is one of the popular log management platforms used worldwide for log analysis.

In the ELK stack −

  1. Logstash extracts the logging data or other events from different input sources. It processes the events and later stores them in Elasticsearch.

  2. Kibana is a visualization tool, which accesses the logs from Elasticsearch and is able to display them to the user in the form of line graphs, bar graphs, pie charts etc.

In this tutorial, we will work closely with Kibana and Elasticsearch and visualize the data in different forms.

In this chapter, let us understand how to work with the ELK stack together. Besides, you will also see how to −

  1. Load CSV data from Logstash to Elasticsearch.

  2. Use indices from Elasticsearch in Kibana.

Load CSV data from Logstash to Elasticsearch

We are going to use CSV data to upload data using Logstash to Elasticsearch. To work on data analysis, we can get data from the kaggle.com website. Kaggle.com has all types of data uploaded, and users can use it to work on data analysis.

We have taken the countries.csv data from here: https://www.kaggle.com/fernandol/countries-of-the-world. You can download the csv file and use it.

The csv file which we are going to use has the following details.

File name − countriesdata.csv

Columns − "Country","Region","Population","Area"

You can also create a dummy csv file and use it. We will be using Logstash to dump this data from countriesdata.csv to Elasticsearch.
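For reference, a minimal dummy countriesdata.csv matching those columns might look like this (the rows below are made-up sample values, not real statistics):

```
Country,Region,Population,Area
India,ASIA,1000000,500000
Brazil,LATIN AMER.,800000,400000
```

A header row is shown here for readability; note that since the file input reads from the beginning, a header row would also be ingested as an event unless it is filtered out.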

Start elasticsearch and Kibana in your terminal and keep them running. We have to create the config file for Logstash, which will have details about the columns of the CSV file and also other details, as shown in the logstash-config file given below −

input {
   file {
      path => "C:/kibanaproject/countriesdata.csv"
      start_position => "beginning"
      sincedb_path => "NUL"
   }
}
filter {
   csv {
      separator => ","
      columns => ["Country","Region","Population","Area"]
   }
   mutate {convert => ["Population", "integer"]}
   mutate {convert => ["Area", "integer"]}
}
output {
   elasticsearch {
      hosts => ["localhost:9200"]
      index => "countriesdata-%{+dd.MM.YYYY}"
   }
   stdout {codec => json_lines }
}

In the config file, we have created 3 components −

Input

We need to specify the path of the input file, which in our case is a csv file. The path where the csv file is stored is given to the path field.

Filter

This will have the csv component with the separator used, which in our case is a comma, and also the columns available in our csv file. As Logstash considers all incoming data as strings, in case we want any column to be used as an integer or float, the same has to be specified using mutate as shown above.
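As a conceptual sketch only (this is plain Python, not how Logstash is implemented), the csv and mutate steps above amount to naming the comma-separated values and then converting the numeric fields:

```python
import csv
import io

# One incoming line, as the file input would read it (sample values)
line = "India,ASIA,1000000,3000000"

# csv { separator => "," columns => [...] } : name the split fields
columns = ["Country", "Region", "Population", "Area"]
values = next(csv.reader(io.StringIO(line)))
event = dict(zip(columns, values))

# mutate { convert => ... } : every field arrives as a string,
# so Population and Area must be converted explicitly
event["Population"] = int(event["Population"])
event["Area"] = int(event["Area"])

print(event)
```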

Output

For output, we need to specify where we need to put the data. Here, in our case, we are using elasticsearch. The data required to be given to elasticsearch is the hosts where it is running; we have mentioned it as localhost. The next field is index, to which we have given the name countriesdata-currentdate. We have to use the same index in Kibana once the data is updated in Elasticsearch.
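The %{+dd.MM.YYYY} suffix in the index name is a date pattern, which is why the index ends up with a name like countriesdata-28.12.2018. A rough Python sketch of how such a name is formed (the mapping of the Joda-style tokens to strftime codes is our assumption):

```python
from datetime import date

def index_name(prefix: str, day: date) -> str:
    # %{+dd.MM.YYYY} roughly corresponds to day.month.year
    return f"{prefix}-{day.strftime('%d.%m.%Y')}"

print(index_name("countriesdata", date(2018, 12, 28)))
```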

Save the above config file as logstash_countries.conf. Note that we need to give the path of this config to the logstash command in the next step.

To load the data from the csv file to elasticsearch, we need to start the elasticsearch server −

[Image: starting the elasticsearch server]

Now, run http://localhost:9200 in the browser to confirm if elasticsearch is running successfully.

[Image: elasticsearch running in the browser]

We have elasticsearch running. Now go to the path where Logstash is installed and run the following command to upload the data to elasticsearch.

> logstash -f logstash_countries.conf
[Image: Logstash console output while uploading CSV data]

The above screen shows data loading from the CSV file to Elasticsearch. To know if we have the index created in Elasticsearch, we can check the same as follows −

We can see the countriesdata-28.12.2018 index created as shown above.

[Image: countriesdata-28.12.2018 index listed]

The details of the index countriesdata-28.12.2018 are as follows −

[Image: countriesdata index details]

Note that the mapping details with properties are created when data is uploaded from logstash to elasticsearch.

Use Data from Elasticsearch in Kibana

Currently, we have Kibana running on localhost, port 5601 − http://localhost:5601. The UI of Kibana is shown here −

[Image: Kibana UI]

Note that we already have Kibana connected to Elasticsearch, and we should be able to see the index countriesdata-28.12.2018 inside Kibana.

In the Kibana UI, click on the Management menu option on the left side −

[Image: Management menu]

Now, click Index Management −

[Image: Index Management]

The indices present in Elasticsearch are displayed in index management. The index we are going to use in Kibana is countriesdata-28.12.2018.

Thus, as we already have the elasticsearch index in Kibana, next we will understand how to use the index in Kibana to visualize data in the form of pie charts, bar graphs, line charts etc.

Kibana - Loading Sample Data

We have seen how to upload data from Logstash to elasticsearch. We will upload data using Logstash and elasticsearch here. But about the data that has date, longitude and latitude fields, which we need to use, we will learn in the upcoming chapters. We will also see how to upload data directly in Kibana if we do not have a CSV file.

In this chapter, we will cover following topics −

  1. Using Logstash to upload data having date, longitude and latitude fields in Elasticsearch

  2. Using Dev tools to upload bulk data

Using Logstash to Upload Data Having Fields in Elasticsearch

We are going to use data in CSV format, and the same is taken from Kaggle.com, which deals with data that you can use for analysis.

The home medical visits data to be used here is picked up from the site Kaggle.com.

The following are the fields available for the CSV file −

["Visit_Status","Time_Delay","City","City_id","Patient_Age","Zipcode","Latitude","Longitude",
"Pathology","Visiting_Date","Id_type","Id_personal","Number_Home_Visits","Is_Patient_Minor","Geo_point"]

The home_visits.csv is as follows −

[Image: home_visits.csv sample rows]

The following is the conf file to be used with logstash −

input {
   file {
      path => "C:/kibanaproject/home_visits.csv"
      start_position => "beginning"
      sincedb_path => "NUL"
   }
}
filter {
   csv {
      separator => ","
      columns =>
      ["Visit_Status","Time_Delay","City","City_id","Patient_Age",
      "Zipcode","Latitude","Longitude","Pathology","Visiting_Date",
      "Id_type","Id_personal","Number_Home_Visits","Is_Patient_Minor","Geo_point"]
   }
   date {
      match => ["Visiting_Date","dd-MM-YYYY HH:mm"]
      target => "Visiting_Date"
   }
   mutate {convert => ["Number_Home_Visits", "integer"]}
   mutate {convert => ["City_id", "integer"]}
   mutate {convert => ["Id_personal", "integer"]}
   mutate {convert => ["Id_type", "integer"]}
   mutate {convert => ["Zipcode", "integer"]}
   mutate {convert => ["Patient_Age", "integer"]}
   mutate {
      convert => { "Longitude" => "float" }
      convert => { "Latitude" => "float" }
   }
   mutate {
      rename => {
         "Longitude" => "[location][lon]"
         "Latitude" => "[location][lat]"
      }
   }
}
output {
   elasticsearch {
      hosts => ["localhost:9200"]
      index => "medicalvisits-%{+dd.MM.YYYY}"
   }
   stdout {codec => json_lines }
}

By default, Logstash considers everything to be uploaded in elasticsearch as a string. In case your CSV file has a date field, you need to do the following to get the date format.

For date field −

date {
   match => ["Visiting_Date","dd-MM-YYYY HH:mm"]
   target => "Visiting_Date"
}
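The pattern dd-MM-YYYY HH:mm matches values such as 26-01-2019 14:30. A quick Python sanity check of the same pattern (the strptime equivalents of the Joda-style tokens are our assumption):

```python
from datetime import datetime

# dd-MM-YYYY HH:mm in the date filter roughly corresponds to
# %d-%m-%Y %H:%M in Python's strptime
visiting_date = datetime.strptime("26-01-2019 14:30", "%d-%m-%Y %H:%M")
print(visiting_date.isoformat())
```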

In case of geo location, elasticsearch understands the same as −

"location": {
   "lat":41.565505000000044,
   "lon": 2.2349995750000695
}

So we need to make sure we have Longitude and Latitude in the format elasticsearch needs. First we need to convert longitude and latitude to float, and later rename them so that they are available as part of the location json object with lat and lon. The code for the same is shown here −

mutate {
   convert => { "Longitude" => "float" }
   convert => { "Latitude" => "float" }
}
mutate {
   rename => {
      "Longitude" => "[location][lon]"
      "Latitude" => "[location][lat]"
   }
}
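Conceptually, these two mutate blocks turn the flat string fields into the nested location object shown earlier. A small Python sketch of that reshaping (the sample event values are illustrative):

```python
# A flat event as parsed from the CSV; all values start as strings
event = {
    "City": "Barcelona",
    "Longitude": "2.2349995750000695",
    "Latitude": "41.565505000000044",
}

# mutate { convert => ... } : strings to floats
lon = float(event.pop("Longitude"))
lat = float(event.pop("Latitude"))

# mutate { rename => ... } : nest them under location as lat/lon
event["location"] = {"lat": lat, "lon": lon}

print(event)
```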

For converting fields to integers, use the following code −

mutate {convert => ["Number_Home_Visits", "integer"]}
mutate {convert => ["City_id", "integer"]}
mutate {convert => ["Id_personal", "integer"]}
mutate {convert => ["Id_type", "integer"]}
mutate {convert => ["Zipcode", "integer"]}
mutate {convert => ["Patient_Age", "integer"]}

Once the fields are taken care of, run the following command to upload the data to elasticsearch −

  1. Go inside the Logstash bin directory and run the following command.

logstash -f logstash_homevisits.conf

  2. Once done, you should see the index mentioned in the logstash conf file in elasticsearch as shown below −

[Image: medicalvisits index created in elasticsearch]

We can now create an index pattern on the above uploaded index and use it further for creating visualizations.

Using Dev Tools to Upload Bulk Data

We are going to use Dev Tools from the Kibana UI. Dev Tools is helpful for uploading data in Elasticsearch without using Logstash. We can post, put, delete and search the data we want in Kibana using Dev Tools.

In this section, we will try to load sample data in Kibana itself. We can use it to practice with the sample data and play around with Kibana features to get a good understanding of Kibana.

Let us take the json data from the following url and upload the same in Kibana. Similarly, you can try any sample json data to be loaded inside Kibana.

Before we start to upload the sample data, we need to have the json data with indices to be used in elasticsearch. When we upload it using Logstash, Logstash takes care of adding the indices, and the user does not have to bother about the indices which are required by elasticsearch.

Normal Json Data

[
   {"type":"act","line_id":1,"play_name":"Henry IV",
   "speech_number":"","line_number":"","speaker":"","text_entry":"ACT I"},
   {"type":"scene","line_id":2,"play_name":"Henry IV",
   "speech_number":"","line_number":"","speaker":"","text_entry":"SCENE I. London. The palace."},
   {"type":"line","line_id":3,"play_name":"Henry IV",
   "speech_number":"","line_number":"","speaker":"","text_entry":
   "Enter KING HENRY, LORD JOHN OF LANCASTER, the
   EARL of WESTMORELAND, SIR WALTER BLUNT, and others"}
]

The json code to be used with Kibana has to be indexed as follows −

{"index":{"_index":"shakespeare","_id":0}}
{"type":"act","line_id":1,"play_name":"Henry IV",
"speech_number":"","line_number":"","speaker":"","text_entry":"ACT I"}
{"index":{"_index":"shakespeare","_id":1}}
{"type":"scene","line_id":2,"play_name":"Henry IV",
"speech_number":"","line_number":"","speaker":"",
"text_entry":"SCENE I. London. The palace."}
{"index":{"_index":"shakespeare","_id":2}}
{"type":"line","line_id":3,"play_name":"Henry IV",
"speech_number":"","line_number":"","speaker":"","text_entry":
"Enter KING HENRY, LORD JOHN OF LANCASTER, the EARL
of WESTMORELAND, SIR WALTER BLUNT, and others"}

Note that there is an additional line that goes into the json file − {"index":{"_index":"nameofindex","_id":key}}.

To convert any sample json file into a format compatible with elasticsearch, here we have a small piece of php code which will output the given json file in the format elasticsearch wants −

PHP Code

<?php
   // open your source json file here
   $myfile = fopen("todo.json", "r") or die("Unable to open file!");
   $alldata = fread($myfile, filesize("todo.json"));
   fclose($myfile);
   $farray = json_decode($alldata);
   $index_name = "todo";
   $i = 0;
   // writes a new file to be used in the kibana dev tool
   $myfile1 = fopen("todonewfile.json", "w") or die("Unable to open file!");
   foreach ($farray as $a => $value) {
      $_index = json_decode('{"index": {"_index": "'.$index_name.'", "_id": "'.$i.'"}}');
      fwrite($myfile1, json_encode($_index));
      fwrite($myfile1, "\n");
      fwrite($myfile1, json_encode($value));
      fwrite($myfile1, "\n");
      $i++;
   }
   fclose($myfile1);
?>

We have taken the todo json file from https://jsonplaceholder.typicode.com/todos and used the php code to convert it to the format we need to upload in Kibana.
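If php is not at hand, the same conversion can be sketched in Python; the helper below is our own and simply interleaves each document with the action line the _bulk API expects:

```python
import json

def to_bulk(docs, index_name):
    """Build the newline-delimited body expected by POST _bulk."""
    lines = []
    for i, doc in enumerate(docs):
        # action/metadata line, then the document itself
        lines.append(json.dumps({"index": {"_index": index_name, "_id": i}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

# A couple of todo-style documents (sample values)
todos = [
    {"userId": 1, "id": 1, "title": "delectus aut autem", "completed": False},
    {"userId": 1, "id": 2, "title": "quis ut nam", "completed": True},
]
print(to_bulk(todos, "todo"))
```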

To load the sample data, open the dev tools tab as shown below −

[Image: Dev Tools console]

We are now going to use the console as shown above. We will take the json data which we got after running it through the php code.

The command to be used in dev tools to upload the json data is −

POST _bulk

Note that the name of the index we are creating is todo.

[Image: POST _bulk request in Dev Tools]

Once you click the green button the data is uploaded. You can check if the index is created or not in elasticsearch as follows −

[Image: todo index created]

You can check the same in dev tools itself as follows −

Command −

GET /_cat/indices

If you want to search something in your index todo, you can do that as shown below −

Command in dev tool

GET /todo/_search

The output of the above search is as shown below −

[Image: search output in Dev Tools]

It gives all the records present in the todo index. The total number of records we are getting is 200.

Search for a Record in todo Index

We can do that using the following command −

GET /todo/_search
{
   "query":{
      "match":{
         "title":"delectus aut autem"
      }
   }
}

[Image: records matching the title]

We are able to fetch the records which match with the title we have given.

Kibana - Management

The Management section in Kibana is used to manage the index patterns. In this chapter, we will discuss the following −

  1. Create Index Pattern without Time filter field

  2. Create Index Pattern with Time filter field

Create Index Pattern Without Time Filter field

To do this, go to the Kibana UI and click Management −

[Image: Kibana Management screen]

To work with Kibana, we first have to create an index which is populated from elasticsearch. You can get all the indices available from Elasticsearch → Index Management as shown −

[Image: Elasticsearch Index Management]

At present, elasticsearch has the above indices. The Docs count tells us the number of records available in each of the indices. If any index gets updated, the docs count will keep changing. Primary storage tells the size of each index uploaded.

To create a new index pattern in Kibana, we need to click on Index Patterns as shown below −

[Image: Index Patterns option]

Once you click Index Patterns, we get the following screen −

[Image: Index Patterns screen]

Note that the Create Index Pattern button is used to create a new index pattern. Recall that we already have countriesdata-28.12.2018 created at the very start of the tutorial.

Create Index Pattern with Time filter field

Click on Create Index Pattern to create a new index pattern.

[Image: Create Index Pattern button]

The indices from elasticsearch are displayed; select one to create a new index pattern.

[Image: selecting an index from elasticsearch]

Now, click Next step.

The next step is to configure the settings, where you need to enter the following −

  1. Time filter field name is used to filter data based on time. The dropdown will display all time and date related fields from the index.

In the image shown below, we have Visiting_Date as a date field. Select Visiting_Date as the Time Filter field name.

[Image: Time Filter field name dropdown]

Click the Create index pattern button to create the index pattern. Once done, it will display all the fields present in your index medicalvisits-26.01.2019 as shown below −

We have following fields in the index medicalvisits-26.01.2019 −

["Visit_Status","Time_Delay","City","City_id","Patient_Age","Zipcode","Latitude","Longitude",
"Pathology","Visiting_Date","Id_type","Id_personal","Number_Home_Visits","Is_Patient_Minor","Geo_point"]

The index has all the data for home medical visits. There are some additional fields added by elasticsearch when the data is inserted from Logstash.

[Image: fields of the medicalvisits index]

Kibana - Discover

This chapter discusses the Discover tab in the Kibana UI. We will learn in detail about the following concepts −

  1. Index without date field

  2. Index with date field

Index without date field

Select Discover from the left side menu as shown below −

[Image: Discover menu]

On the right side, it displays the details of the data available in the countriesdata-28.12.2018 index we created in the previous chapter.

On the top left corner, it shows the total number of records available −

[Image: total record count]

We can get the details of the data inside the index (countriesdata-28.12.2018) in this tab. On the top left corner of the screen shown above, we can see buttons like New, Save, Open, Share, Inspect and Auto-refresh.

If you click Auto-refresh, it will display the screen as shown below −

[Image: auto-refresh interval options]

You can set the auto-refresh interval by clicking on the seconds, minutes or hours from above. Kibana will auto-refresh the screen and get fresh data after every interval timer you set.

The data from index:countriesdata-28.12.2018 is displayed as shown below −

All the fields along with the data are shown row wise. Click the arrow to expand a row and it will give you the details in Table format or JSON format.

[Image: expanded row in table format]

JSON Format

[Image: expanded row in JSON format]

There is a button on the left side called View single document.

[Image: View single document link]

If you click it, it will display the row or the data present in the row inside the page as shown below −

[Image: single document view]

Though we are getting all the data details here, it is difficult to go through each of them.

Now let us try to get the data in tabular format. One way is to expand one of the rows and click the toggle column option available across each field, as shown below −

单击每个可用的“数据表中切换列”选项,您将注意到数据以表格格式显示

Click on Toggle column in table option available for each and you will notice the data being shown in table format −

toggle column

在此处,我们选择了字段国家、地区、区域和人口。折叠展开的行,您现在应该看到所有数据为表格格式。

Here, we have selected fields Country, Area, Region and Population. Collapse the expanded row and you should see all the data in tabular format now.

selected fields

The fields we selected are displayed on the left side of the screen as shown below −

selected fields displayed

Observe that there are 2 options − Selected fields and Available fields. The fields we have chosen to show in tabular format are part of the Selected fields. In case you want to remove any field, you can do so by clicking the remove button shown against the field name under the Selected fields option.

remove fields

Once removed, the field will be listed under Available fields, from where you can add it back by clicking the add button shown against it. You can also use this method to get your data in tabular format by choosing the required fields from Available fields.

We have a search option available in Discover, which we can use to search for data inside the index. Let us try some examples related to the search option here −

Suppose you want to search for the country India; you can do as follows −

search fields

You can type your search details and click the Update button. If you want to search for countries starting with Aus, you can do so as follows −

update fields

Click Update to see the results −

update results

Here, we have two countries starting with Aus. The search field has an Options button as shown above. When a user clicks it, it displays a toggle button which, when ON, helps in writing search queries.

search query

Turn on the query features and type a field name in the search bar; it will display the options available for that field.

For example, the Country field is a string and it displays the following options for a string field −

string field

Similarly, Area is a number field and it displays the following options for a number field −

number field
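Putting these options together, here are a few illustrative queries you could type in the Discover search bar. The field names (Country, Area) are assumptions based on the countries index described above; the exact names and casing depend on your mapping −

```
Country: India                      # match a string field exactly
Country: Aus*                       # countries whose name starts with Aus
Area >= 1000000                     # compare against a number field
Country: I* and Area >= 1000000     # combine conditions
```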

You can try out different combinations and filter the data as per your choice in Discover. The data inside the Discover tab can be saved using the Save button, so that you can use it in future.

To save the data inside Discover, click the save button on the top right corner as shown below −

save search

Give a title to your search and click Confirm Save to save it. Once saved, the next time you visit the Discover tab, you can click the Open button on the top right corner to get the saved titles as shown below −

open search

You can also share the data with others using the Share button available on the top right corner. If you click it, you can find sharing options as shown below −

share search

You can share it using CSV Reports or in the form of Permalinks.

The options available on clicking CSV Reports are −

csv reports

Click Generate CSV to get the report to be shared with others.

The options available on clicking Permalinks are as follows −

onclick permalinks

The Snapshot option will give a Kibana link which will display the data available in the search currently.

The Saved object option will give a Kibana link which will display the recent data available in your search.

Snapshot − http://localhost:5601/goto/309a983483fccd423950cfb708fabfa5

Saved Object − http://localhost:5601/app/kibana#/discover/40bd89d0-10b1-11e9-9876-4f3d759b471e?_g=()

You can work with the Discover tab and the search options available, and the results obtained can be saved and shared with others.

Index with Date Field

Go to the Discover tab and select the index medicalvisits-26.01.2019 −

discover tab select index

It displays the message − “No results match your search criteria”, for the last 15 minutes on the index we have selected. The index has data for the years 2015, 2016, 2017 and 2018.

Change the time range as shown below −

change time range

Click the Absolute tab.

absolute tab

Select the date From − 1st Jan 2017 and To − 31st Dec 2017, as we will analyze data for the year 2017.

select date

Click the Go button to add the time range. It will display the data and a bar chart as follows −

add timerange

This is the monthly data for the year 2017 −

monthly data

Since we also have the time stored along with the date, we can filter the data on hours and minutes too.

filter data

The figure shown above displays the hourly data for the year 2017.

Here are the fields displayed from the index medicalvisits-26.01.2019 −

hourly data

We have the available fields on the left side as shown below −

hourly data available fields

You can select fields from the Available fields and convert the data into tabular format as shown below. Here we have selected the following fields −

tabular format

The tabular data for the above fields is shown here −

tabular datas

Kibana - Aggregation And Metrics

The two terms that you come across frequently while learning Kibana are Bucket and Metric Aggregation. This chapter discusses the role they play in Kibana and more details about them.

What is Kibana Aggregation?

Aggregation refers to the collection of documents or a set of documents obtained from a particular search query or filter. Aggregation forms the main concept for building the desired visualization in Kibana.

Whenever you perform any visualization, you need to decide the criteria, which means the way in which you want to group the data to perform the metric on it.

In this section, we will discuss two types of Aggregation −

  1. Bucket Aggregation

  2. Metric Aggregation

Bucket Aggregation

A bucket mainly consists of a key and a document. When the aggregation is executed, the documents are placed in the respective buckets. So at the end, you should have a list of buckets, each with a list of documents. The list of Bucket Aggregations you will see while creating a visualization in Kibana is shown below −

bucket aggregation

Bucket Aggregation has the following list −

  1. Date Histogram

  2. Date Range

  3. Filters

  4. Histogram

  5. IPv4 Range

  6. Range

  7. Significant Terms

  8. Terms

While creating a visualization, you need to choose one of them for the Bucket Aggregation, i.e. how to group the documents into buckets.

As an example for analysis, consider the countries data that we uploaded at the start of this tutorial. The fields available in the countries index are country name, area, population and region.

Let us assume that we want region wise data. Then the countries available in each region become our search result, so in this case the regions will form our buckets. The block diagram below shows that R1, R2, R3, R4, R5 and R6 are the buckets which we got, and c1, c2 … c25 are the documents which are part of the buckets R1 to R6.

block diagram aggregation

We can see that there are some circles in each of the buckets. They are the set of documents matching the search criteria and considered to fall in that bucket. In the bucket R1, we have the documents c1, c8 and c15. These documents are the countries falling in that region, and the same holds for the others. So if we count the countries, bucket R1 has 3, 6 for R2, 5 for R3, 2 for R4, 5 for R5 and 4 for R6.

So through bucket aggregation, we can aggregate the documents into buckets and have a list of documents in those buckets as shown above.
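As a rough sketch in Kibana Dev Tools, the bucketing described above corresponds to an Elasticsearch terms aggregation. The field name Region.keyword is an assumption based on the countries index; adjust it to your actual mapping −

```
GET countriesdata-28.12.2018/_search
{
  "size": 0,
  "aggs": {
    "regions": {
      "terms": { "field": "Region.keyword" }
    }
  }
}
```

The response contains one bucket per region, each with a key (the region name) and the count of documents that fell into it.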

Let us now discuss how to form these buckets one by one in detail.

Date Histogram

Date Histogram aggregation is used on a date field. So this aggregation type can be used only if the index you use to visualize has a date field in it. This is a multi-bucket aggregation, which means some of the documents can be part of more than one bucket. There is an interval to be used for this aggregation and the details are as shown below −

date histogram

When you select the Bucket Aggregation as Date Histogram, it will display the Field option, which lists only the date related fields. Once you select your field, you need to select the Interval, which has the following details −

select interval histogram

So the documents from the chosen index will be categorized into buckets based on the field and interval chosen. For example, if you choose the interval as monthly, the documents will be put into buckets based on the month of their date field, i.e. Jan to Dec. Here Jan, Feb, … Dec will be the buckets.
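A hedged Dev Tools sketch of the underlying date_histogram aggregation, assuming a date field simply named Date in the medical visits index used earlier (the actual field name depends on your mapping) −

```
GET medicalvisits-26.01.2019/_search
{
  "size": 0,
  "aggs": {
    "visits_per_month": {
      "date_histogram": {
        "field": "Date",
        "interval": "month"
      }
    }
  }
}
```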

Date Range

You need a date field to use this aggregation type. Here a date range is given, i.e. a from date and a to date. The buckets will hold their documents based on the from and to dates given.

date range

Filters

With the Filters type aggregation, the buckets will be formed based on filters. This is a multi-bucket aggregation, where based on the filter criteria one document can exist in one or more buckets.

Using filters, users can write their queries in the filter option as shown below −

filters

You can add multiple filters of your choice by using the Add Filter button.

Histogram

This type of aggregation is applied on a number field and it will group the documents into buckets based on the interval applied, for example, 0-50, 50-100, 100-150 etc.

histogram
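A minimal Dev Tools sketch of a histogram aggregation producing such ranges, assuming a number field named Area (a placeholder name based on the countries index) −

```
GET countriesdata-28.12.2018/_search
{
  "size": 0,
  "aggs": {
    "area_buckets": {
      "histogram": {
        "field": "Area",
        "interval": 50
      }
    }
  }
}
```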

IPv4 Range

This type of aggregation is used mainly for IP addresses.

ipv4 range

The index that we have, i.e. countriesdata-28.12.2018, does not have a field of type IP, so it displays a message as shown above. If you happen to have an IP field, you can specify the From and To values in it as shown above.

Range

This type of aggregation needs the field to be of type number. You need to specify the range, and the documents will be listed in the buckets falling in that range.

You can add more ranges if required by clicking the Add Range button.

Significant Terms

This type of aggregation is mostly used on the string fields.

significant terms

Terms

This type of aggregation is used on all the available field types, namely number, string, date, boolean, IP address, timestamp etc. Note that this is the aggregation we are going to use in all the visualizations that we are going to work on in this tutorial.

terms

We have an Order By option, by which we can order the buckets based on the metric we select. Size refers to the number of buckets you want to display in the visualization.

Next, let us talk about Metric Aggregation.

Metric Aggregation

Metric Aggregation mainly refers to the maths calculation done on the documents present in the bucket. For example, if you choose a number field, the metric calculations you can do on it are COUNT, SUM, MIN, MAX, AVERAGE etc.

A list of the metric aggregations that we shall discuss is given here −

metric aggregation

In this section, let us discuss the important ones, which we are going to use often −

  1. Average

  2. Count

  3. Max

  4. Min

  5. Sum

The metric will be applied on the individual bucket aggregations that we have already discussed above.

Next, let us discuss these metric aggregations here −

Average

This will give the average of the values of the documents present in the buckets. For example −

average

R1 to R6 are the buckets. In R1 we have c1, c8 and c15. Consider that the value of c1 is 300, c8 is 500 and c15 is 700. Now, to get the average value of the R1 bucket −

R1 = (value of c1 + value of c8 + value of c15) / 3 = (300 + 500 + 700) / 3 = 500.

The average is 500 for bucket R1. Here the value of the document could be anything; for example, if you consider the countries data, it could be the area of the country in that region.

Count

This will give the count of documents present in the bucket. Suppose you want the count of the countries present in each region; it will be the total number of documents present in each bucket. For example, for R1 it will be 3, R2 = 6, R3 = 5, R4 = 2, R5 = 5 and R6 = 4.

Max

This will give the max value of the documents present in the bucket. Considering the above example, if we have area wise countries data in the region buckets, the max for each region will be the country with the largest area. So it will have one country from each region, i.e. R1 to R6.

Min

This will give the min value of the documents present in the bucket. Considering the above example, if we have area wise countries data in the region buckets, the min for each region will be the country with the smallest area. So it will have one country from each region, i.e. R1 to R6.

Sum

This will give the sum of the values of the documents present in the bucket. For example, if we want the total area of the countries in a region, it will be the sum over the documents present in that region.

For example, to know the total number of countries in each region: R1 will be 3, R2 = 6, R3 = 5, R4 = 2, R5 = 5 and R6 = 4.

In case we have documents with the area in the region, then R1 to R6 will have the country wise area summed up for the region.
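The metrics above can be attached to a bucket aggregation as sub-aggregations. A hedged Dev Tools sketch, with Region.keyword, Country.keyword and Area as assumed field names from the countries index −

```
GET countriesdata-28.12.2018/_search
{
  "size": 0,
  "aggs": {
    "regions": {
      "terms": { "field": "Region.keyword" },
      "aggs": {
        "country_count": { "value_count": { "field": "Country.keyword" } },
        "max_area": { "max": { "field": "Area" } },
        "min_area": { "min": { "field": "Area" } },
        "total_area": { "sum": { "field": "Area" } },
        "avg_area": { "avg": { "field": "Area" } }
      }
    }
  }
}
```

Each region bucket in the response then carries its own count, max, min, sum and average values.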

Kibana - Create Visualization

We can visualize the data we have in the form of bar charts, line graphs, pie charts etc. In this chapter, we will understand how to create visualization.

Create Visualization

Go to Kibana Visualization as shown below −

visualization

We do not have any visualizations created, so it shows blank and there is a button to create one.

Click the button Create a visualization as shown in the screen above and it will take you to the screen as shown below −

create visualization

Here you can select the option which you need to visualize your data. We will understand each one of them in detail in the upcoming chapters. Right now, we will select pie chart to start with.

pie chart

Once you select the visualization type, you need to select the index on which you want to work, and it will take you to the screen as shown below −

visualization type

Now we have a default pie chart. We will use countriesdata-28.12.2018 to get the count of regions available in the countries data in pie chart format.

Bucket and Metric Aggregation

The left side has metrics, which we will select as Count. In Buckets, there are 2 options: Split Slices and Split Chart. We will use the option Split Slices.

bucket metric aggregation

Now, select Split Slices and it will display the following options −

split slices

Now, select the Aggregation as Terms and it will display more options to be entered as follows −

aggregation as terms

The Fields dropdown will have all the fields from the chosen index countriesdata. We have chosen the Region field, and for Order By we have chosen the metric Count. We will order it Descending, and the size we have taken is 10. This means we will get the top 10 region counts from the countries index.
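These UI settings map roughly to the following terms aggregation, sketched here as a Dev Tools request; Region.keyword is an assumed field name, and ordering by _count descending mirrors the Order By and Descending choices −

```
GET countriesdata-28.12.2018/_search
{
  "size": 0,
  "aggs": {
    "top_regions": {
      "terms": {
        "field": "Region.keyword",
        "size": 10,
        "order": { "_count": "desc" }
      }
    }
  }
}
```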

Now, click the analyse button as highlighted below and you should see the pie chart updated on the right side.

analyse button

Pie chart display

pie chart display

All the regions are listed at the top right corner with colours, and the same colours are shown in the pie chart. If you mouse over the pie chart, it will give the count of the region and also the name of the region as shown below −

So it tells us that 22.77% of the region is occupied by Sub-Saharan Afri in the countries data we have uploaded.

countries data
countries data uploaded

The Asia region covers 12.5% and the count is 28.

Now we can save the visualization by clicking the save button on the top right corner as shown below −

new visualization
save visualization

Now, save the visualization so that it can be used later.

We can also get the data we want by using the search option as shown below −

search visualization

We have filtered the data for countries starting with Aus*. We will understand more on pie charts and other visualizations in the upcoming chapters.

Kibana - Working With Charts

Let us explore and understand the most commonly used charts in visualization.

  1. Horizontal Bar Chart

  2. Vertical Bar Chart

  3. Pie Chart

The following are the steps to be followed to create the above visualizations. Let us start with the Horizontal Bar chart.

Horizontal Bar Chart

Open Kibana and click the Visualize tab on the left side as shown below −

horizontal bar chart

Click the + button to create a new visualization −

kibana visualization

Click Horizontal Bar in the list shown above. You will have to select the index you want to visualize.

horizontal bar list

Select the countriesdata-28.12.2018 index as shown above. On selecting the index, it displays a screen as shown below −

selecting index

It shows a default count. Now, let us plot a horizontal graph where we can see the top 10 country wise populations.

For this purpose, we need to select what we want on the Y and X axes. Hence, select the Bucket and Metric Aggregation −

horizontal graph

Now, if you click on Y-Axis, it will display the screen as shown below −

click y axis

Now, select the Aggregation that you want from the options shown here −

aggregation options

Note that here we will select the Max aggregation, as we want to display the data as per the maximum population available.

Next, we have to select the field whose max value is required. In the index countriesdata-28.12.2018, we have only 2 number fields − area and population.

Since we want the max population, we select the Population field as shown below −

max population

With this, we are done with the Y-axis. The output that we get for the Y-axis is as shown below −

y axis output

Now let us select the X-axis as shown below −

select x axis

If you select X-Axis, you will get the following output −

x axis output

Choose the Aggregation as Terms.

choose aggregation terms

Choose the field from the dropdown. We want country wise population, so select the Country field. For Order By, we have the following options −

dropdown field

We are going to choose Order By as Max Population, as we want the country with the highest population to be displayed first, and so on. Once the data we want is added, click the apply changes button on top of the Metrics data as shown below −

highest population

Once you click apply changes, we have the horizontal graph wherein we can see that China is the country with the highest population, followed by India, United States etc.

horizontal graph china
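Ordering buckets by a metric, as this chart does, corresponds to a terms aggregation ordered by a metric sub-aggregation. A hedged Dev Tools sketch, with Country.keyword and Population as assumed field names −

```
GET countriesdata-28.12.2018/_search
{
  "size": 0,
  "aggs": {
    "countries": {
      "terms": {
        "field": "Country.keyword",
        "size": 10,
        "order": { "max_population": "desc" }
      },
      "aggs": {
        "max_population": { "max": { "field": "Population" } }
      }
    }
  }
}
```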

Similarly, you can plot different graphs by choosing the fields you want. Next, we will save this visualization as max_population to be used later for dashboard creation.

In the next section, we will create a vertical bar chart.

Vertical Bar Chart

Click the Visualize tab and create a new visualization using Vertical Bar and the index countriesdata-28.12.2018.

In this vertical bar visualization, we will create a bar graph with country wise area, i.e. countries will be displayed in order of highest area.

So let us select the Y and X axes as shown below −

Y-axis

vertical bar chart

X-axis

vertical bar chart x axis

When we apply the changes here, we can see the output as shown below −

vertical bar chart output

From the graph, we can see that Russia has the highest area, followed by Canada and United States. Please note that this data is picked from the index countriesdata and is dummy data, so the figures might not match live data.

Let us save this visualization as countrywise_maxarea to be used with the dashboard later.

Next, let us work on the Pie chart.

Pie Chart

First, create a visualization and select the pie chart with the index countriesdata. We are going to display the count of regions available in the countries data in pie chart format.

The left side has the metric which will give the count. In Buckets, there are 2 options: Split Slices and Split Chart. Now, we will use the option Split Slices.

pie chart visualization

Now, if you select Split Slices, it will display the following options −

pie chart buckets

Select the Aggregation as Terms and it will display more options to be entered as follows −

aggregation terms

The Fields dropdown will have all the fields from the chosen index. We have selected the Region field and Count for Order By. We will order it Descending and take the size as 10. So here we will get the top 10 region counts from the countries index.

Now, click the play button as highlighted below and you should see the pie chart updated on the right side.

pie chart updated

Pie chart display

pie chart displayed

All the regions are listed at the top right corner with colours, and the same colours are shown in the pie chart. If you mouse over the pie chart, it will give the count of the region and also the name of the region as shown below −

pie chart mouse over
pie chart region

Thus, it tells us that 22.77% of the region is occupied by Sub-Saharan Afri in the countries data we have uploaded.

From the pie chart, observe that the Asia region covers 12.5% and the count is 28.

Now we can save the visualization by clicking the save button on the top right corner as shown below −

asia region

Now, save the visualization so that it can be used later in a dashboard.

visualization dashboard

Kibana - Working With Graphs

In this chapter, we will discuss the two types of graphs used in visualization −

  1. Line Graph

  2. Area

Line Graph

To start with, let us create a visualization, choosing a line graph to display the data, and use countriesdata as the index. We need to create the Y-axis and X-axis, and the details for the same are shown below −

For Y-axis

line graph y axis

Observe that we have taken Max as the Aggregation, so we are going to show the data presentation in a line graph. Now, we will plot a graph that shows the max population country wise. The field we have taken is Population, since we need the maximum population country wise.

For X-axis

line graph x axis

On the X-axis, we have taken Terms as the Aggregation, Country.keyword as the Field, and the metric Max Population for Order By, with an order size of 5. So it will plot the top 5 countries with max population. After applying the changes, you can see the line graph as shown below −

line graph changes

So we have the max population in China, followed by India, United States, Indonesia and Brazil as the top 5 countries by population.

Now, let us save this line graph so that we can use it in the dashboard later.

save line graph

Click Confirm Save and you can save the visualization.

Area Graph

Go to Visualize and choose Area with the index countriesdata. We need to select the Y-axis and X-axis. We will plot an area graph for the max area country wise.

So here the X-axis and Y-axis will be as shown below −

area graph
area graph axis

After you click the apply changes button, the output that we can see is as shown below −

area graph changes

From the graph, we can observe that Russia has the highest area, followed by Canada, United States, China and Brazil. Save the visualization to use it later.

Kibana - Working With Heat Map

In this chapter, we will understand how to work with a heat map. A heat map shows the data presentation in different colours for the ranges selected in the data metrics.

Getting Started with Heat Map

To start with, we need to create a visualization by clicking the Visualize tab on the left side as shown below −

heat map visualization

Select the visualization type as Heat Map as shown above. It will ask you to choose the index as shown below −

heat map index

Select the index countriesdata-28.12.2018 as shown above. Once the index is selected, we have the data to be selected as shown below −

heat map index selected

Select the Metrics as shown below −

heat map metrics

Select Max Aggregation from the dropdown as shown below −

heat map max aggregation

We have selected Max since we want to plot the max area country wise.

Now we will select the values for Buckets as shown below −

heat map buckets

现在,让我们选择 X 轴,如下所示 −

Now, let us select the X-Axis as shown below −

heat map x axis

我们将聚合用作词条、字段用作国家且按最大区域排序。单击应用更改,如下所示 −

We have used Aggregation as Terms, Field as Country and Order By Max Area. Click on Apply Changes as shown below −

heat map max area

如果您单击“应用更改”,则热图如下所示 −

If you click Apply Changes, the heat map looks as shown below −

heat map changes

热图以不同的颜色显示,区域范围显示在右侧。您可以通过单击区域范围旁边的圆圈来更改颜色,如下所示 −

The heat map is shown with different colours and the range of areas is displayed on the right side. You can change the colour by clicking on the small circles next to the area range as shown below −

heat map displayed
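The Terms bucket ordered by a Max metric used above can be sketched in plain Python. The sample documents and area values below are assumed for illustration only and are not taken from the actual index:

```python
# Sketch of a Terms bucket aggregation ordered by a Max metric, as in the
# heat map above (Field: Country, Order By: Max Area). Sample values only.
docs = [
    {"country": "Russia", "area": 17098242},
    {"country": "Canada", "area": 9984670},
    {"country": "Canada", "area": 9984670},
    {"country": "China", "area": 9596960},
]

def terms_max(documents, term_field, metric_field, size=5):
    """Group documents by term_field, keeping the max metric per bucket,
    then return the top `size` buckets ordered by that metric."""
    buckets = {}
    for doc in documents:
        key = doc[term_field]
        buckets[key] = max(buckets.get(key, float("-inf")), doc[metric_field])
    return sorted(buckets.items(), key=lambda kv: kv[1], reverse=True)[:size]

top = terms_max(docs, "country", "area")
print(top)
```

Each bucket keeps only the maximum metric seen for its term, which is why duplicate documents for the same country do not inflate the result.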

Kibana - Working With Coordinate Map

Kibana 中的坐标地图将显示地理区域,并根据您指定的聚合用圆圈标记区域。

Coordinate maps in Kibana will show you the geographic area and mark the area with circles based on aggregation you specify.

Create Index for Coordinate Map

用于坐标地图的 Bucket 聚合是 geohash 聚合。对于这种聚合类型,您要使用的索引应具有地理点类型字段。地理点是纬度和经度的组合。

The bucket aggregation used for a coordinate map is the geohash aggregation. For this type of aggregation, the index you are going to use should have a field of type geo point. A geo point is a combination of latitude and longitude.
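Elasticsearch accepts a geo_point in several formats; one common one is a string of the form "lat,lon". A minimal parsing sketch (the helper name is ours, for illustration only):

```python
def parse_geo_point(value):
    """Split a 'lat,lon' geo_point string into a pair of floats."""
    lat, lon = (float(part) for part in value.split(","))
    return lat, lon

# In the string format, latitude comes first, longitude second.
print(parse_geo_point("41.47367000000008,2.089330000000046"))
```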

我们将使用 Kibana 开发工具创建一个索引,并向其中添加批量数据。我们将添加映射并添加所需的 geo_point 类型。

We will create an index using Kibana dev tools and add bulk data to it. We will add mapping and add the geo_point type that we need.

我们要使用的数据如下所示 −

The data that we are going to use is shown here −

{"index":{"_id":1}}
{"location": "2.089330000000046,41.47367000000008", "city": "SantCugat"}
{"index":{"_id":2}}
{"location": "2.2947825000000677,41.601800991000076", "city": "Granollers"}
{"index":{"_id":3}}
{"location": "2.1105957495300474,41.5496295760424", "city": "Sabadell"}
{"index":{"_id":4}}
{"location": "2.132605678083895,41.5370461908878", "city": "Barbera"}
{"index":{"_id":5}}
{"location": "2.151270020052683,41.497779918345415", "city": "Cerdanyola"}
{"index":{"_id":6}}
{"location": "2.1364609496220606,41.371303520399344", "city": "Barcelona"}
{"index":{"_id":7}}
{"location": "2.0819450306711165,41.385491966414705", "city": "Sant Just Desvern"}
{"index":{"_id":8}}
{"location": "2.00532082278266,41.542294286427385", "city": "Rubi"}
{"index":{"_id":9}}
{"location": "1.9560805366930398,41.56142635214226", "city": "Viladecavalls"}
{"index":{"_id":10}}
{"location": "2.09205348251486,41.39327140161001", "city": "Esplugas de Llobregat"}

现在,在 Kibana 开发工具中运行以下命令,如下所示 −

Now, run the following commands in Kibana Dev Tools as shown below −

PUT /cities
{
   "mappings": {
      "_doc": {
         "properties": {
            "location": {
               "type": "geo_point"
            }
         }
      }
   }
}

POST /cities/_doc/_bulk?refresh
{"index":{"_id":1}}
{"location": "2.089330000000046,41.47367000000008", "city": "SantCugat"}
{"index":{"_id":2}}
{"location": "2.2947825000000677,41.601800991000076", "city": "Granollers"}
{"index":{"_id":3}}
{"location": "2.1105957495300474,41.5496295760424", "city": "Sabadell"}
{"index":{"_id":4}}
{"location": "2.132605678083895,41.5370461908878", "city": "Barbera"}
{"index":{"_id":5}}
{"location": "2.151270020052683,41.497779918345415", "city": "Cerdanyola"}
{"index":{"_id":6}}
{"location": "2.1364609496220606,41.371303520399344", "city": "Barcelona"}
{"index":{"_id":7}}
{"location": "2.0819450306711165,41.385491966414705", "city": "Sant Just Desvern"}
{"index":{"_id":8}}
{"location": "2.00532082278266,41.542294286427385", "city": "Rubi"}
{"index":{"_id":9}}
{"location": "1.9560805366930398,41.56142635214226", "city": "Viladecavalls"}
{"index":{"_id":10}}
{"location": "2.09205348251486,41.39327140161001", "city": "Esplugas de Llobregat"}

现在,在 Kibana 开发工具中运行以上命令 −

Now, run the above commands in Kibana dev tools −

kibana dev tools

上述将创建类型为 _doc 的索引名称城市,字段位置为类型 geo_point。

The above will create index name cities of type _doc and the field location is of type geo_point.
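The shape of the _bulk request body used above (an action line followed by a document line, newline-delimited) can be sketched in Python. Actually posting it is only hinted at in a comment, since that assumes a local Elasticsearch instance:

```python
import json

# Build the NDJSON body expected by the _bulk API: for each document,
# an action line ({"index": {...}}) followed by the document itself.
cities = [
    {"location": "2.089330000000046,41.47367000000008", "city": "SantCugat"},
    {"location": "2.2947825000000677,41.601800991000076", "city": "Granollers"},
]

def build_bulk_body(documents, start_id=1):
    lines = []
    for doc_id, doc in enumerate(documents, start=start_id):
        lines.append(json.dumps({"index": {"_id": doc_id}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # the bulk API requires a trailing newline

body = build_bulk_body(cities)
print(body)
# To send it (assuming Elasticsearch on localhost:9200):
# requests.post("http://localhost:9200/cities/_doc/_bulk?refresh",
#               data=body, headers={"Content-Type": "application/x-ndjson"})
```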

现在,让我们向索引添加数据:城市 −

Now let’s add data to the index: cities −

kibana index name

我们已完成创建带有数据的 cities 索引。现在,让我们使用管理选项卡为 cities 创建索引模式。

We are done creating the index cities with data. Now let us create an index pattern for cities using the Management tab.

kibana index pattern

此处显示了 cities 索引中字段的详细信息 −

The details of fields inside cities index are shown here −

kibana index details

我们可以看到 location 是类型 geo_point。我们现在可以使用它来创建可视化。

We can see that location is of type geo_point. We can now use it to create visualization.

Getting Started with Coordinate Maps

转至可视化并选择坐标地图。

Go to Visualization and select coordinate maps.

coordinate maps

选择索引模式城市,并按如下所示配置聚合指标和桶 −

Select the index pattern cities and configure the Aggregation metric and bucket as shown below −

configure aggregation metric

如果您单击“分析”按钮,则可以看到以下屏幕 −

If you click on Analyze button, you can see the following screen −

analyze button

根据经度和纬度,圆圈绘制在地图上,如上所示。

Based on the longitude and latitude, the circles are plotted on the map as shown above.

Kibana - Working With Region Map

通过此可视化,您可以在世界地理地图上看到表示的数据。在本章中,让我们详细了解它。

With this visualization, you see the data represented on the geographical world map. In this chapter, let us see this in detail.

Create Index for Region Map

我们创建一个新索引,以便使用区域地图可视化工具。我们即将上传的数据如下所示 −

We will create a new index to work with region map visualization. The data that we are going to upload is shown here −

{"index":{"_id":1}}
{"country": "China", "population": "1313973713"}
{"index":{"_id":2}}
{"country": "India", "population": "1095351995"}
{"index":{"_id":3}}
{"country": "United States", "population": "298444215"}
{"index":{"_id":4}}
{"country": "Indonesia", "population": "245452739"}
{"index":{"_id":5}}
{"country": "Brazil", "population": "188078227"}
{"index":{"_id":6}}
{"country": "Pakistan", "population": "165803560"}
{"index":{"_id":7}}
{"country": "Bangladesh", "population": "147365352"}
{"index":{"_id":8}}
{"country": "Russia", "population": "142893540"}
{"index":{"_id":9}}
{"country": "Nigeria", "population": "131859731"}
{"index":{"_id":10}}
{"country": "Japan", "population": "127463611"}

请注意,我们将在 dev 工具中使用 _bulk 上载数据。

Note that we will use _bulk upload in dev tools to upload the data.

现在,转到 Kibana Dev 工具并执行以下查询 −

Now, go to Kibana Dev Tools and execute following queries −

PUT /allcountries
{
   "mappings": {
      "_doc": {
         "properties": {
            "country": {"type": "keyword"},
               "population": {"type": "integer"}
         }
      }
   }
}
POST /allcountries/_doc/_bulk?refresh
{"index":{"_id":1}}
{"country": "China", "population": "1313973713"}
{"index":{"_id":2}}
{"country": "India", "population": "1095351995"}
{"index":{"_id":3}}
{"country": "United States", "population": "298444215"}
{"index":{"_id":4}}
{"country": "Indonesia", "population": "245452739"}
{"index":{"_id":5}}
{"country": "Brazil", "population": "188078227"}
{"index":{"_id":6}}
{"country": "Pakistan", "population": "165803560"}
{"index":{"_id":7}}
{"country": "Bangladesh", "population": "147365352"}
{"index":{"_id":8}}
{"country": "Russia", "population": "142893540"}
{"index":{"_id":9}}
{"country": "Nigeria", "population": "131859731"}
{"index":{"_id":10}}
{"country": "Japan", "population": "127463611"}

接下来,让我们创建索引 allcountries。我们已将国家字段类型指定为 keyword。

Next, let us create the index allcountries. We have specified the country field type as keyword.

PUT /allcountries
{
   "mappings": {
      "_doc": {
         "properties": {
            "country": {"type": "keyword"},
            "population": {"type": "integer"}
         }
      }
   }
}

Note − 要使用区域地图,与聚合一起使用的字段必须为 keyword 类型。

Note − To work with region maps, the field to be used with the aggregation must be of type keyword.

kibana region maps

完成后,使用 _bulk 命令上传数据。

Once done, upload the data using _bulk command.

kibana using bulk

现在我们将创建索引模式。转到 Kibana 管理选项卡并选择创建索引模式。

We will now create an index pattern. Go to the Kibana Management tab and select Create index pattern.

kibana management tab

下面将显示来自 allcountries 索引的字段。

Here are the fields displayed from allcountries index.

displayed allcountries index

Getting Started with Region Maps

现在,我们使用区域地图创建可视化效果。转到可视化效果并选择区域地图。

We will now create the visualization using Region Maps. Go to Visualization and select Region Maps.

visualization using region maps

完成后,选择索引 allcountries 并继续。

Once done, select the index allcountries and proceed.

按照以下所示选择聚合指标和存储桶指标 −

Select Aggregation Metrics and Bucket Metrics as shown below −

select aggregation metrics
bucket metrics

在此,我们选择国家(Country)字段,因为我们想在世界地图上显示它。

Here we have selected the field Country, as we want to show the same on the world map.

Vector Map and Join Field for Region Map

对于区域地图,我们还需要配置“选项”选项卡,如下所示 −

For region maps, we also need to configure the Options tab as shown below −

vector map

此选项卡具有图层设置配置,需要这些配置才能将数据绘制到世界地图上。

The Options tab has the Layer Settings configuration, which is required to plot the data on the world map.

矢量地图具有以下选项 −

A Vector Map has the following options −

vector map options

在此,我们选择世界各国,因为我们具有各个国家的数据。

Here we will select World Countries, as we have country-wise data.

Join Field 具有以下详细信息 −

The Join Field has the following details −

join field

在我们索引中,我们有国家名称,因此我们将选择国家名称。

In our index we have the country name, so we will select country name.

在样式设置中,您可以选择国家/地区要显示的颜色 −

In Style settings you can choose the color to be displayed for the countries −

style settings

我们选择红色。我们不会触及其他详细信息。

We will select Reds. We will not touch the rest of the details.

现在,单击“分析”按钮,查看如下显示在地图上的国家地区详细信息 −

Now, click on the Analyze button to see the details of the countries plotted on the world map as shown below −

click analyze button

Self-hosted Vector Map and Join Field in Kibana

您还可以为矢量地图和联接字段添加您自己的 Kibana 设置。为此,从 Kibana 配置文件夹中转至 kibana.yml,并添加以下详细信息 −

You can also add your own Kibana settings for the vector map and join field. To do that, go to kibana.yml in the Kibana config folder and add the following details −

regionmap:
   includeElasticMapsService: false
   layers:
      - name: "Countries Data"
        url: "http://localhost/kibana/worldcountries.geojson"
        attribution: "INRAP"
        fields:
           - name: "Country"
             description: "country names"

选项选项卡中的矢量地图将使用上述数据填充,而不是默认数据。请注意,必须为给出的 URL 启用 CORS,以便 Kibana 可以下载它。所使用的 JSON 文件中的坐标应是连续的。

The vector map in the options tab will then be populated with the above data instead of the default one. Please note that the URL given has to be CORS enabled so that Kibana can download it. The JSON file used should have its coordinates in continuation.

当 region-map 矢量地图详细信息自托管时,选项选项卡如下所示 −

The options tab when region-map vector map details are self-hosted is shown below −

vector map details

Kibana - Working With Gauge And Goal

仪表可视化显示基于数据计算的指标如何落在预定义范围内。

A gauge visualization shows how the metric computed on your data falls within a predefined range.

目标可视化描述了你的目标,以及你的数据指标如何朝该目标推进。

A goal visualization shows your goal and how the metric on your data progresses towards it.

Working with Gauge

如需开始使用 Gauge,请转到可视化,并从 Kibana UI 中选择可视化选项卡。

To start using Gauge, go to the Visualize tab in the Kibana UI.

working with gauge

点击 Gauge,并选择你想要使用的索引。

Click on Gauge and select the index you want to use.

click on gauge

我们将在 medicalvisits-26.01.2019 索引上工作。

We are going to work on medicalvisits-26.01.2019 index.

选择 2017 年 2 月的时间范围。

Select the time range of February 2017.

time range of february

现在你可以选择指标和存储桶聚合。

Now you can select the metric and bucket aggregation.

metric bucket aggregation

我们选择了计数作为指标聚合。

We have selected the metric aggregation as Count.

metric aggregation count

我们选择的桶聚合是词条(Terms),所选字段是 Number_Home_Visits。

We have selected Terms as the bucket aggregation and Number_Home_Visits as the field.

从数据选项选项卡中,选择显示在下面的选项−

From Data options Tab, the options selected are shown below −

data options tab

Gauge 类型可以是圆或弧。我们选择了弧,其余均保留默认值。

The Gauge type can be either a circle or an arc. We have selected Arc and kept all the other values as defaults.

我们添加的预定义范围显示在这里 −

The predefined range we have added is shown here −

gauge default values

选择的颜色是绿到红。

The colour selected is Green To Red.
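The way a gauge maps a metric value to one of the predefined colour bands can be sketched as a small lookup. The band boundaries below are assumed examples for illustration, not the exact values configured in the screenshots:

```python
# Assumed example bands for a Green-to-Red gauge; each entry is
# (lower bound inclusive, upper bound exclusive, colour).
BANDS = [
    (0, 50, "green"),
    (50, 75, "yellow"),
    (75, 100, "red"),
]

def gauge_band(value, bands=BANDS):
    """Return the colour of the band the value falls into; values past
    the last band keep its colour."""
    for low, high, colour in bands:
        if low <= value < high:
            return colour
    return bands[-1][2]

print(gauge_band(42), gauge_band(90))
```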

现在,点击分析按钮,以 Gauge 形式查看可视化,如下所示 −

Now, click on Analyze Button to see the visualization in the form of Gauge as shown below −

visualization of gauge

Working with Goal

转到可视化选项卡,并选择目标,如下所示 −

Go to Visualize Tab and select Goal as shown below −

visualize tab

选择目标并选择索引。

Select Goal and select the index.

使用 medicalvisits-26.01.2019 作为索引。

Use medicalvisits-26.01.2019 as the index.

select goal index

选择指标聚合和桶聚合。

Select the metric aggregation and bucket aggregation.

Metric Aggregation

gauge metric aggregation

我们选择了计数作为指标聚合。

We have selected Count as the metric aggregation.

Bucket Aggregation

gauge bucket aggregation

我们选择了词条(Terms)作为桶聚合,字段是 Number_Home_Visits。

We have selected Terms as the bucket aggregation and field is Number_Home_Visits.

选择的选项如下 −

The options selected are as follows −

number home visits

选择的范围如下 −

The Range selected is as follows −

gauge range selected

点击分析,您会看到目标显示如下 −

Click on Analyze and you see the goal displayed as follows −

gauge goal display

Kibana - Working With Canvas

画布是 Kibana 的另一项强大功能。使用画布可视化,您可以用不同颜色组合、形状、文本、多页设置等表示您的数据。

Canvas is yet another powerful feature in Kibana. Using canvas visualization, you can represent your data in different color combination, shapes, text, multipage setup etc.

我们需要数据来显示在画布中。现在,让我们加载一些 Kibana 中已有的示例数据。

We need data to show in the canvas. Now, let us load some sample data already available in Kibana.

Loading Sample Data for Canvas Creation

要获取示例数据,请转到 Kibana 主页并单击“添加示例数据”,如下所示:

To get the sample data go to Kibana home page and click on Add sample data as shown below −

kibana home page

单击“加载数据集和 Kibana 仪表板”。这会将您带到如下图所示的屏幕:

Click on Load a data set and a Kibana dashboard. It will take you to the screen as shown below −

kibana dashboard

单击示例电子商务订单的“添加”按钮。加载示例数据需要一些时间。完成后,您将收到一条警报消息,“示例电子商务数据已加载”。

Click on the Add button for Sample eCommerce orders. It will take some time to load the sample data. Once done, you will get an alert message showing “Sample eCommerce data loaded.”

Getting Started with Canvas Visualization

现在,转到画布可视化,如下所示:

Now go to Canvas Visualization as shown below −

canvas visualization

单击“画布”,它将显示如下图所示的屏幕:

Click on Canvas and it will display screen as shown below −

canvas workpads

我们已经添加了电子商务和 Web 流量示例数据。我们可以创建新的工作区或使用现有工作区。

We have eCommerce and Web Traffic sample data added. We can create new workpad or use the existing one.

在这里,我们将选择现有的工作区。选择电子商务收入追踪工作区名称,它将显示如下图所示的屏幕:

Here, we will select the existing one. Select eCommerce Revenue Tracking Workpad Name and it will display the screen as shown below −

ecommerce revenue tracking workpad

Cloning an Existing Workpad in Canvas

我们将克隆工作区,以便对其进行更改。要克隆现有工作区,请单击左下方显示的工作区名称:

We will clone the workpad so that we can make changes to it. To clone an existing workpad, click on the name of the workpad shown at the bottom left −

cloning existing workpad canvas

单击名称并选择“克隆”选项,如下所示:

Click on the name and select clone option as shown below −

select clone option

单击克隆按钮,它将创建电子商务收入追踪工作区的副本。您可以按以下方式找到它:

Click on the clone button and it will create a copy of the eCommerce Revenue Tracking workpad. You can find it as shown below −

cloning existing workpad canvas

在本部分中,让我们了解如何使用工作区。如果您看到上面的工作区,它有 2 页。因此,在画布中,我们可以在多页中表示数据。

In this section, let us understand how to use the workpad. As seen above, the workpad has 2 pages. Thus, in Canvas we can represent the data across multiple pages.

第 2 页显示如下:

The page 2 display is as shown below −

canvas multiple pages

选择第 1 页,然后单击左侧显示的“总销售额”,如下所示:

Select Page 1 and click on the Total sales displayed on left side as shown below −

cloning existing workpad canvas

在右侧,您将获得与之相关的数据:

On the right side, you will get the data related to it −

cloning related data

现在使用的默认样式是绿色。我们可以在此处更改颜色,并检查相同内容的显示。

Right now the default style used is green colour. We can change the colour here and check how the display changes.

change colour

我们还更改了文本设置的字体和大小,如下所示:

We have also changed the font and size for text settings as shown below −

change font size

Adding New Page to Workpad Inside Canvas

要向工作区添加新页面,请执行以下操作:

To add new page to the workpad, do as shown below −

workpad inside canvas

创建的新页面如下所示 −

The newly created page is shown below −

workpad page created

单击“添加元素”,它将显示所有可能的可视化,如下所示 −

Click on Add element and it will display all possible visualization as shown below −

display possible visualization

我们已经添加了“数据表”和“面积图”两个元素,如下所示 −

We have added two elements, Data Table and Area Chart, as shown below −

data table area chart

您可以在同一页面中添加更多数据元素,或者添加更多页面。

You can add more data elements to the same page or add more pages too.

Kibana - Create Dashboard

在之前的章节中,我们已经看到了如何创建垂直条形、水平条形、饼图等形式的可视化。在本章中,让我们学习如何将它们组合成仪表板。仪表板是由您创建的可视化集合,因此您可以同时查看它们。

In our previous chapters, we have seen how to create visualizations in the form of vertical bar, horizontal bar, pie chart etc. In this chapter, let us learn how to combine them together in the form of a Dashboard. A dashboard is a collection of the visualizations you have created, so that you can look at them all together at a time.

Getting Started with Dashboard

要在 Kibana 中创建仪表板,请单击“仪表板”选项,如下所示 −

To create Dashboard in Kibana, click on the Dashboard option available as shown below −

create dashboard

现在,单击“创建新仪表板”按钮,如下所示。它将带我们到如下所示的屏幕 −

Now, click on Create new dashboard button as shown above. It will take us to the screen as shown below −

new dashboard button

请注意,到目前为止,我们还没有创建任何仪表板。顶部有选项,我们可以在其中保存、取消、添加、选项、共享、自动刷新,还可以更改时间以获取仪表板上的数据。我们将单击上面显示的“添加”按钮来创建一个新仪表板。

Observe that we do not have any dashboard created so far. There are options at the top where we can Save, Cancel, Add, Options, Share, Auto-refresh and also change the time to get the data on our dashboard. We will create a new dashboard by clicking on the Add button shown above.

Add Visualization to Dashboard

当我们单击“添加”按钮(左上角)时,它会显示我们创建的可视化,如下所示 −

When we click the Add button (top left corner), it displays us the visualization we created as shown below −

add visualization dashboard

选择您要添加到仪表板的可视化。我们将选择前三个可视化,如下所示 −

Select the visualization you want to add to your dashboard. We will select the first three visualizations as shown below −

first three visualizations

这是它们在屏幕上同时显示的方式 −

This is how it is seen on the screen together −

on screen together

因此,作为用户,您能够获得我们上传的数据的总体详细信息——按国家/地区划分,包括国家/地区名称、区域名称、面积和人口字段。

Thus, as a user you are able to get the overall details about the data we have uploaded – country-wise, with the fields country name, region name, area and population.

所以现在我们知道了所有可用的区域,按降序排列的每个国家的人口最多、面积最大的区域等等。

So now we know all the regions available, the max population country wise in descending order, the max area etc.

这只是我们上传的示例数据可视化,但在现实中,跟踪业务的详细信息会变得非常容易。例如,您有一个每月或每天获得数百万次点击的网站,并希望跟踪每天、每小时、每分钟或每秒完成的销售额;只要部署了 ELK 堆栈,Kibana 就可以按您需要的粒度(小时、分钟、秒)在您眼前显示您的销售可视化。它显示的是现实世界中正在发生的实时数据。

This is just the sample data visualization we uploaded, but in the real world it becomes very easy to track the details of your business. For example, suppose you have a website which gets millions of hits monthly or daily, and you want to keep track of the sales done every day, hour, minute or second. If you have your ELK stack in place, Kibana can show you your sales visualization right in front of your eyes every hour, minute or second as you want to see it. It displays the real-time data as it is happening in the real world.

总的来说,Kibana 在按天、按小时或每分钟提取有关您业务交易的准确详细信息方面发挥着非常重要的作用,因此公司知道进展如何。

Kibana, on the whole, plays a very important role in extracting the accurate details about your business transactions day-wise, hourly or every minute, so that the company knows how its progress is going.

Save Dashboard

您可以使用顶部的“保存”按钮保存仪表板。

You can save your dashboard by using the save button at the top.

save dashboard

有一个标题和描述,您可以在其中输入仪表板的名称和简短描述,说明仪表板的作用。现在,单击“确认保存”以保存仪表板。

There is a title and description where you can enter the name of the dashboard and a short description which tells what the dashboard does. Now, click on Confirm Save to save the dashboard.

Changing Time Range for Dashboard

目前,您会看到显示的数据是过去 15 分钟的数据。请注意,这是没有时间字段的静态数据,因此显示的数据不会改变。当数据来自实时系统时,更改时间范围也会相应更新所显示的数据。

At present, you can see that the data shown is of the last 15 minutes. Please note that this is static data without any time field, so the data displayed will not change. When your data is connected to a real-time system, changing the time range will update the data shown accordingly.

默认情况下,您会看到“过去 15 分钟”,如下所示 −

By default, you will see Last 15 minutes as shown below −

time range dashboard

单击“过去 15 分钟”,然后将显示您可选择的时段范围。

Click on the Last 15 minutes and it will display the time ranges from which you can select as per your choice.

请注意,有快速、相对、绝对和最新选项。以下屏幕截图显示了快速选项的详细信息 −

Observe that there are Quick, Relative, Absolute and Recent options. The following screenshot shows the details for Quick option −

details quick option

现在,单击相对以查看可用的选项 −

Now, click on Relative to see the option available −

click on relative

您可以在此处指定分钟、小时、秒、月、年前的起始和结束日期。

Here you can specify the From and To dates in minutes, hours, seconds, months or years ago.
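The Relative option can be thought of as resolving an expression such as "15 minutes ago" against the current time. A minimal sketch handling only the now-&lt;n&gt;&lt;unit&gt; shorthand (Kibana's own time parsing is far richer than this):

```python
import re
from datetime import datetime, timedelta

UNITS = {"s": "seconds", "m": "minutes", "h": "hours", "d": "days"}

def resolve_relative(expr, now):
    """Resolve 'now-15m' style expressions against a reference time."""
    match = re.fullmatch(r"now-(\d+)([smhd])", expr)
    if match is None:
        raise ValueError(f"unsupported expression: {expr}")
    amount, unit = int(match.group(1)), match.group(2)
    return now - timedelta(**{UNITS[unit]: amount})

now = datetime(2019, 1, 28, 12, 0, 0)
print(resolve_relative("now-15m", now))  # start of the default 15-minute window
```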

绝对选项具有以下详细信息 −

The Absolute option has the following details −

from and to date

您可以看到日历选项,并可以选择一个日期范围。

You can see the calendar option and can select a date range.

“最新”选项将带回“过去 15 分钟”选项以及您最近选择的其他时间范围。选择时间范围将更新该时间范围内的数据。

The Recent option brings back the Last 15 minutes option along with other time ranges you have selected recently. Choosing a time range will update the data coming within that time range.

Using Search and Filter in Dashboard

我们还可以在仪表板上使用搜索和筛选器。在搜索中,假设如果我们需要获取特定区域的详细信息,我们可以添加搜索,如下所示 −

We can also use search and filter on the dashboard. In search, suppose we want to get the details of a particular region; we can add a search as shown below −

filter in dashboard

在上述搜索中,我们使用了 Region(区域)字段,并且需要显示区域为 OCEANIA(大洋洲)的详细信息。

In the above search, we have used the field Region and want to display the details of region:OCEANIA.

我们获得以下结果 −

We get following results −

dashboard results

根据以上数据,我们可以说在大洋洲地区,澳大利亚人口最多,面积最大。

Looking at the above data, we can say that in the OCEANIA region, Australia has the maximum population and area.

max population area
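The observation above can be reproduced in a few lines of Python: keep only the documents whose Region field matches, then take the one with the largest population. The sample rows are assumed for illustration and are not the full uploaded dataset:

```python
# Assumed sample documents mirroring the countriesdata fields.
docs = [
    {"Country": "Australia", "Region": "OCEANIA", "Population": 20264082},
    {"Country": "New Zealand", "Region": "OCEANIA", "Population": 4076140},
    {"Country": "India", "Region": "ASIA", "Population": 1095351995},
]

# Equivalent of the Region:OCEANIA search above.
oceania = [d for d in docs if d["Region"] == "OCEANIA"]
largest = max(oceania, key=lambda d: d["Population"])
print(largest["Country"])
```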

类似地,我们也可以添加筛选器,如下所示 −

Similarly, we can add a filter as shown below −

add filter
dashboard filter

接下来,单击“添加筛选器”按钮,它将显示索引中提供的字段的详细信息,如下所示 −

Next, click on the Add a filter button and it will display the details of the fields available in your index as shown below −

details field

选择您要在其上进行筛选的字段。我将使用 Region(区域)字段来获取亚洲地区详细信息,如下所示 −

Choose the field you want to filter on. We will use the Region field to get the details of the ASIA region as shown below −

保存筛选器,您应该会看到以下筛选器 −

Save the filter and you should see the filter as follows −

asia region details

现在,数据将按所添加的筛选器显示 −

The data will now be shown as per the filter added −

filter added

您还可以添加更多筛选器,如下所示 −

You can also add more filters as shown below −

add more filters

您可以单击“禁用”复选框来禁用筛选器,如下所示。

You can disable the filter by clicking on the disable checkbox as shown below.

disable checkbox

您可以再次单击同一复选框以激活筛选器。请注意,有一个删除按钮可用于删除筛选器,还有一个编辑按钮可用于编辑筛选器或更改筛选器选项。

You can activate the filter again by clicking on the same checkbox. Observe that there is a Delete button to remove the filter, and an Edit button to edit the filter or change its options.

对于显示的可视化效果,您将看到如下所示的三个点 -

For the visualization displayed, you will notice three dots as shown below −

notice three dots

单击并显示如下所示的选项 -

Click on it and it will display options as shown below −

display options

Inspect and Fullscreen

单击“检查”,在表格格式中为区域提供详细信息,如下所示 -

Click on Inspect and it gives the details of the region in tabular format as shown below −

region tabular format

还有一个选项可以以 CSV 格式下载可视化效果,以防您想在 Excel 表格中查看它。

There is an option to download the visualization in CSV format, in case you want to see it in an Excel sheet.

下一个选项全屏将以全屏模式获取可视化效果,如下所示 -

The next option, Fullscreen, will display the visualization in full-screen mode as shown below −

visualization fullscreenmode

您可以使用该按钮退出全屏模式。

You can use the same button to exit the fullscreen mode.

Sharing Dashboard

我们可以使用共享按钮共享仪表盘。单击共享按钮后,您将获得以下显示 -

We can share the dashboard using the Share button. On clicking the Share button, you will get a display as follows −

dashboard share button

您还可以使用嵌入代码在您的网站上显示仪表盘或使用永久链接,永久链接将是与他人共享的链接。

You can also use embed code to show the dashboard on your site or use permalinks which will be a link to share with others.

dashboard permalinks

URL 将如下所示 -

The url will be as follows −

http://localhost:5601/goto/519c1a088d5d0f8703937d754923b84b

Kibana - Timelion

Timelion,也称为时间轴,是另一种可视化工具,它主要用于基于时间的数据分析。若要使用时间轴,我们需要使用简单的表达式语言,这将帮助我们连接到索引,并在数据上执行计算以获得我们需要的结果。

Timelion, also called timeline, is yet another visualization tool, mainly used for time-based data analysis. To work with Timelion, we need to use a simple expression language, which helps us connect to the index and also perform calculations on the data to get the results we need.

Where can we use Timelion?

当您想比较与时间相关的数据时,可以使用 Timelion。例如,您有一个网站,并且您每天都会获得浏览量。您想分析数据,其中您要将本周数据与上周进行比较,即星期一到星期一,星期二到星期二,依此类推,了解浏览量和流量是如何不同的。

Timelion is used when you want to compare time-related data. For example, you have a site and you get views daily. You want to analyse the data by comparing the current week's data with the previous week's, i.e. Monday to Monday, Tuesday to Tuesday and so on, to see how the views and the traffic differ.

Getting Started with Timelion

要开始使用 Timelion,请点击 Timelion,如下所示 −

To start working with Timelion, click on Timelion as shown below −

started with timelion

Timelion 默认显示所有索引的时间轴,如下所示 −

Timelion by default shows the timeline of all indexes as shown below −

timelion indexes

Timelion 使用表达式语法。

Timelion works with expression syntax.

Note − es(*) ⇒ 表示所有索引。

Note − es(*) ⇒ means all indexes.

要获取可用于 Timelion 的函数详细信息,只需点击文本区域,如下所示 −

To get the details of the functions available to be used with Timelion, simply click on the textarea as shown below −

click textarea

它会为您提供可用于表达式语法的函数列表。

It gives you the list of functions to be used with the expression syntax.

当您开始使用 Timelion 时,它会显示一个欢迎信息,如下所示。强调的部分,即跳转到函数引用,给出了可用于 Timelion 的所有函数的详细信息。

Once you start with Timelion, it displays a welcome message as shown below. The highlighted section, i.e. Jump to the function reference, gives the details of all the functions available to be used with Timelion.

Timelion Welcome Message

Timelion 欢迎信息如下所示 −

The Timelion welcome message is as shown below −

welcome message

点击下一步按钮,它将引导您了解其基本功能和用法。现在,当您点击下一步时,您可以看到以下详细信息 −

Click on the next button and it will walk you through its basic functionality and usage. Now when you click Next, you can see the following details −

timelion basic functionality
querying elasticsearch datasource
expressing elasticsearch datasource
transforming data

Timelion Function Reference

点击帮助按钮,以获取 Timelion 可提供的函数引用的详细信息 −

Click on Help button to get the details of the function reference available for Timelion −

function reference

Timelion Configuration

Timelion 的设置在 Kibana 管理 → 高级设置中完成。

The settings for Timelion are done in Kibana Management → Advanced Settings.

timelion configuration

点击高级设置并从类别中选择 Timelion。

Click on Advanced Settings and select Timelion from Category.

timelion category

选择 Timelion 后,它将显示 timelion 配置所需的所有必要字段。

Once Timelion is selected, it will display all the necessary fields required for Timelion configuration.

timelion necessary fields

在以下字段中,您可以更改要在索引上使用的默认索引和时间字段 −

In the following fields you can change the default index and the timefield to be used on the index −

timelion timefield

默认设置是 _all,时间字段是 @timestamp。我们保留原样,在时间表本身更改索引和时间字段。

The default index is _all and the timefield is @timestamp. We will leave these as they are and change the index and timefield in the Timelion expression itself.

Using Timelion to Visualize Data

我们将使用索引:medicalvisits-26.01.2019。以下是从 2017 年 1 月 1 日至 2017 年 12 月 31 日在时间表中显示的数据 -

We are going to use the index medicalvisits-26.01.2019. The following is the data displayed in Timelion for 1st Jan 2017 to 31st Dec 2017 −

timelion display

用于上述可视化的表达式如下 -

The expression used for above visualization is as follows −

.es(index=medicalvisits-26.01.2019,timefield=Visiting_Date).bars()

我们使用了索引 medicalvisits-26.01.2019,该索引中的时间字段是 Visiting_Date,并使用了条形图功能。

We have used the index medicalvisits-26.01.2019; the timefield on that index is Visiting_Date, and we have used the bars function.

下面我们按天分析了 2017 年 1 月份的 2 个城市。

In the following, we have analyzed 2 cities for the month of January 2017, day-wise.

timelion analyzed

使用表达式为:

The expression used is −

.es(index=medicalvisits-26.01.2019,timefield=Visiting_Date,
q=City:Sabadell).label(Sabadell),.es(index=medicalvisits-26.01.2019,
timefield=Visiting_Date, q=City:Terrassa).label(Terrassa)

此处显示了 2 天的时间表对比 -

The timeline comparison for 2 days is shown here −

Expression

.es(index=medicalvisits-26.01.2019,timefield=Visiting_Date).label("August 2nd 2018"),
.es(index=medicalvisits-26.01.2019,timefield=Visiting_Date,offset=-1d).label("August 1st 2018")

这里我们使用偏移量并且给出了 1 天的差异。我们选择了 2018 年 8 月 2 日作为当前日期。因此它给出 2018 年 8 月 2 日和 2018 年 8 月 1 日的数据差异。

Here we have used offset and given a difference of 1 day. We have selected the current date as 2nd August 2018, so it gives the data difference for 2nd Aug 2018 and 1st Aug 2018.

timelion comparison
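What offset=-1d does can be sketched as re-keying a date-indexed series so that the previous day's value lines up with the current day. The visit counts below are assumed sample values, not data from the index:

```python
from datetime import date, timedelta

# Assumed sample counts per day.
visits = {
    date(2018, 8, 1): 120,
    date(2018, 8, 2): 150,
}

def offset_series(series, days):
    """Shift a date->value series: with days=-1, yesterday's value is
    re-keyed to today so the two lines can be compared point-for-point."""
    return {d - timedelta(days=days): v for d, v in series.items()}

shifted = offset_series(visits, -1)  # mimics offset=-1d
today = date(2018, 8, 2)
print(visits[today], shifted[today])  # current value vs previous day's value
```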

2017 年 1 月份排名前 5 位的城市数据列表如下。我们在此处使用的表达式如下:

The list of top 5 cities data for the month of Jan 2017 is shown below. The expression that we have used here is given below −

.es(index=medicalvisits-26.01.2019,timefield=Visiting_Date,split=City.keyword:5)
list of top cities

我们使用了拆分(split)并将字段名指定为 City;由于我们需要索引中排名前五的城市,因此写作 split=City.keyword:5。

We have used split and given the field name as City; since we need the top five cities from the index, we have given it as split=City.keyword:5.

它给出了每个城市的数量,并列出它们的名字,如所绘图表中所示。

It gives the count of each city and lists their names as shown in the graph plotted.
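The split=City.keyword:5 behaviour, counting documents per city and keeping the five largest buckets, can be sketched with a Counter. The city list below is an assumed sample, not the actual index contents:

```python
from collections import Counter

# Assumed sample of City values from individual documents.
city_values = ["Sabadell", "Terrassa", "Sabadell", "Barcelona", "Rubi",
               "Sabadell", "Terrassa", "Granollers", "Barcelona", "Cerdanyola"]

# most_common(5) mirrors the ":5" part of the split expression: the five
# buckets with the highest document counts, in descending order.
top_five = Counter(city_values).most_common(5)
print(top_five)
```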

Kibana - Dev Tools

我们可以使用 Dev Tools 在 Elasticsearch 中上传数据,而无需使用 Logstash。我们可以发布、放入、删除、搜索我们在 Kibana 中使用 Dev Tools 想要的数据。

We can use Dev Tools to upload data in Elasticsearch, without using Logstash. We can post, put, delete, search the data we want in Kibana using Dev Tools.

要在 Kibana 中创建新索引,我们可以在 Dev Tools 中使用以下命令 −

To create a new index in Kibana, we can use the following command in Dev Tools −

Create Index Using PUT

创建索引的命令如下所示 -

The command to create index is as shown here −

PUT /usersdata?pretty

执行此命令后,将创建一个空索引 usersdata。

Once you execute this, an empty index usersdata is created.

create index using put

索引创建已完成。现在我们将向索引中添加数据 −

We are done with the index creation. Now we will add the data to the index −

Add Data to Index Using PUT

您可以如下向索引中添加数据 -

You can add data to an index as follows −

add data using put
add data index

我们将向 usersdata 索引中再添加一条记录 −

We will add one more record in the usersdata index −

usersdata index

因此,我们在 usersdata 索引中有 2 条记录。

So we have 2 records in usersdata index.

Fetch Data from Index Using GET

我们可以如下获得记录 1 的详细信息 -

We can get the details of record 1 as follows −

index using get

您可以如下获得所有记录 -

You can get all records as follows −

get all records

所以,我们可以像上面所示获得来自 usersdata 的所有记录。

Thus, we can get all the records from usersdata as shown above.

Update Data in Index Using PUT

要更新记录,您可以执行以下操作:

To update the record, you can do as follows −

update index using put

我们将名称从“Ervin Howell”更改为“Clementine Bauch”。现在,我们可以从索引获取所有记录并如下所示查看更新的记录:

We have changed the name from “Ervin Howell” to “Clementine Bauch”. Now we can get all records from the index and see the updated record as follows −

update record

Delete Data from Index Using DELETE

您可以如下所示删除记录:

You can delete the record as shown here −

index using delete

现在,如果您查看总记录,我们将只看到一条记录:

Now, if you look at the total records, we will have only one record −

我们可以如下所示删除创建的索引:

We can delete the index created as follows −

index using deleted
delete index created

现在,如果你检查可用的索引,我们将不会在其中找到 usersdata 索引,因为我们已经删除了该索引。

Now, if you check the indices available, we will not find the usersdata index in it, as we have deleted the index.

Kibana - Monitoring

Kibana 监控提供了有关 ELK 堆栈性能的详细信息。我们可以获取所用内存、响应时间等的详细信息。

Kibana Monitoring gives the details about the performance of ELK stack. We can get the details of memory used, response time etc.

Monitoring Details

要获取 Kibana 中的监控详细信息,请单击如下所示的监控选项卡 −

To get monitoring details in Kibana, click on the monitoring tab as shown below −

kibana monitoring

Since we are using monitoring for the first time, we need to turn it on. For this, click the Turn on monitoring button as shown above. Here are the details displayed for Elasticsearch −

[Screenshot: turn on monitoring]

It gives the Elasticsearch version, the disk space available, the indices added to Elasticsearch, disk usage, etc.

The monitoring details for Kibana are shown here −

[Screenshot: Kibana monitoring details]

It gives the requests and the maximum response time for the requests, as well as the running instances and memory usage.

Kibana - Creating Reports Using Kibana

Reports can be easily created by using the Share button available in Kibana UI.

Reports in Kibana are available in the following two forms −

  1. Permalinks

  2. CSV Report

When performing visualization, you can share the same as follows −

[Screenshot: report permalinks]

Use the share button to share the visualization with others as Embed Code or Permalinks.

In case of Embed Code, you get the following options −

[Screenshot: embed code options]

You can generate the iframe code as a short URL or a long URL, for either a snapshot or a saved object. A snapshot will not show recent data; the user will see the data as it was saved when the link was shared. Any changes made later will not be reflected.

In case of a saved object, you will see the most recent changes made to that visualization.

Snapshot iframe code for a long URL −

<iframe src="http://localhost:5601/app/kibana#/visualize/edit/87af
cb60-165f-11e9-aaf1-3524d1f04792?embed=true&_g=()&_a=(filters:!(),linked:!f,query:(language:lucene,query:''),
uiState:(),vis:(aggs:!((enabled:!t,id:'1',params:(field:Area),schema:metric,type:max),(enabled:!t,id:'2',p
arams:(field:Country.keyword,missingBucket:!f,missingBucketLabel:Missing,order:desc,orderBy:'1',otherBucket:!
f,otherBucketLabel:Other,size:10),schema:segment,type:terms)),params:(addLegend:!t,addTimeMarker:!f,addToo
ltip:!t,categoryAxes:!((id:CategoryAxis-1,labels:(show:!t,truncate:100),position:bottom,scale:(type:linear),
show:!t,style:(),title:(),type:category)),grid:(categoryLines:!f,style:(color:%23eee)),legendPosition:right,
seriesParams:!((data:(id:'1',label:'Max+Area'),drawLi
nesBetweenPoints:!t,mode:stacked,show:true,showCircles:!t,type:histogram,valueAxis:ValueAxis-1)),times:!(),
type:histogram,valueAxes:!((id:ValueAxis-1,labels:(filter:!f,rotate:0,show:!t,truncate:100),name:LeftAxis-1,
position:left,scale:(mode:normal,type:linear),show:!t,style:(),title:(text:'Max+Area'),type:value))),title:
'countrywise_maxarea+',type:histogram))" height="600" width="800"></iframe>

Snapshot iframe code for a short URL −

<iframe src="http://localhost:5601/goto/f0a6c852daedcb6b4fa74cce8c2ff6c4?embed=true" height="600" width="800"></iframe>

This is the snapshot form shared as a short URL.

With Short URL on −

http://localhost:5601/goto/f0a6c852daedcb6b4fa74cce8c2ff6c4

With Short URL off, the link looks as below −

http://localhost:5601/app/kibana#/visualize/edit/87afcb60-165f-11e9-aaf1-3524d1f04792?_g=()&_a=(filters:!(
),linked:!f,query:(language:lucene,query:''),uiState:(),vis:(aggs:!((enabled:!t,id:'1',params:(field:Area),
schema:metric,type:max),(enabled:!t,id:'2',params:(field:Country.keyword,missingBucket:!f,missingBucketLabel:
Missing,order:desc,orderBy:'1',otherBucket:!f,otherBucketLabel:Other,size:10),schema:segment,type:terms)),
params:(addLegend:!t,addTimeMarker:!f,addTooltip:!t,categoryAxes:!((id:CategoryAxis-1,labels:(show:!t,trun
cate:100),position:bottom,scale:(type:linear),show:!t,style:(),title:(),type:category)),grid:(categoryLine
s:!f,style:(color:%23eee)),legendPosition:right,seriesParams:!((data:(id:'1',label:'Max%20Area'),drawLines
BetweenPoints:!t,mode:stacked,show:true,showCircles:!t,type:histogram,valueAxis:ValueAxis-1)),times:!(),
type:histogram,valueAxes:!((id:ValueAxis-1,labels:(filter:!f,rotate:0,show:!t,truncate:100),name:LeftAxis-1,
position:left,scale:(mode:normal,type:linear),show:!t,style:(),title:(text:'Max%20Area'),type:value))),title:'countrywise_maxarea%20',type:histogram))

When you hit the above link in the browser, you will get the same visualization as shown above. The above links are hosted locally, so they will not work outside the local environment.

CSV Report

You can get a CSV report in Kibana wherever there is data, which is mostly in the Discover tab.

Go to the Discover tab and pick any index you want the data for. Here, we have taken the index countriesdata-26.12.2018. This is the data displayed from the index −

[Screenshot: data shown in Discover tab]

You can create tabular data from the above data as shown below −

[Screenshot: CSV tabular data]

We have selected the fields from Available fields, and the data seen earlier is converted into a tabular format.

You can get the above data as a CSV report as shown below −

[Screenshot: CSV report option]

The Share button has options for CSV Reports and Permalinks. You can click on CSV Reports and download the report.

Please note that to get CSV reports, you need to save your search first.

[Screenshot: save search dialog]

Confirm the save, then click the Share button and CSV Reports. You will get the following display −

[Screenshot: CSV Reports panel]

Click on Generate CSV to get your report. Once done, it will instruct you to go to the Management tab.

Go to Management Tab → Reporting

[Screenshot: Management tab → Reporting]

It displays the report name, creation time, status, and actions. You can click on the download button highlighted above to get your CSV report.

The CSV file we just downloaded is as shown here −

[Screenshot: downloaded CSV file]