1. After changing Logstash to add fields or grok out new fields, the new fields show up in the Kibana Discover table, but flagged with a warning (!).
2. In Kibana Management -> Index Patterns, click "Refresh field list" and the warning (!) disappears.
3. If Logstash sets a field's type to "geo_point" but the Kibana Discover table still shows its type as "text", try deleting the index:

GET _cat/indices?v
GET _cat/indices?v&s=index
GET filebeat-6.5.1-2018.12.06
DELETE filebeat-6.5.1-2018.12.06

After deleting the real index, it is rebuilt and the geo_point type usually shows up.
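A minimal sketch of the same check-and-delete cycle with curl, assuming Elasticsearch is reachable on localhost:9200 and the index name matches the one above:

# List all indices, sorted by name, to find the one with the stale mapping
curl -s 'http://localhost:9200/_cat/indices?v&s=index'

# Inspect the current mapping of the Filebeat index before deleting it
curl -s 'http://localhost:9200/filebeat-6.5.1-2018.12.06/_mapping?pretty'

# Delete the real index; it is rebuilt (with the template's geo_point mapping)
# the next time Logstash/Filebeat writes a document
curl -s -X DELETE 'http://localhost:9200/filebeat-6.5.1-2018.12.06'

After the index is recreated, refresh the field list in Kibana Management -> Index Patterns so Discover picks up the new field type.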

Continue reading…

https://sukbeta.github.io/2018/11/21/ES-%E8%A7%A3%E5%86%B3-memory-lock-%E9%97%AE%E9%A2%98/

Check with: http://xxx.xxx.xxx.xxx:9200/_nodes?filter_path=**.mlockall

My docker-compose setup:

services:
  elasticsearch:
    build:
      context: elasticsearch/
    volumes:
      - ./elasticsearch/esdata:/usr/share/elasticsearch/data:rw
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    networks:
      - elk
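A quick sanity check, assuming the container is published on localhost:9200 as in the compose file above:

# Every node should report "mlockall": true once bootstrap.memory_lock
# and the memlock ulimits are in effect
curl -s 'http://localhost:9200/_nodes?filter_path=**.mlockall&pretty'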

Continue reading…

Now check again….

1. template_filebeat.json can only be:

{
  "index_patterns": ["filebeat*"],
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "doc": {
      "properties": {
        "geoip.location": {
          "type": "geo_point"
        },
        "geoip.coordinates": {
          "type": "geo_point"
        }
      }
    }
  }
}

The important point here: "location" alone is wrong, it must be "geoip.location". But sometimes even this does not work, because of the way I insert the index-pattern: geoip.location never shows up as a field, it always gets overwritten by geoip.location.lat and geoip.location.lon. See point 2.

2. index-pattern (index-pattern-export.json): one way is to just try to PUT it
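A hedged sketch of loading that template with curl (the template name template_filebeat and localhost:9200 are assumptions; this uses the legacy /_template API matching the 6.x-era mapping above):

# Store the index template so new filebeat-* indices map the geoip
# fields as geo_point
curl -s -X PUT 'http://localhost:9200/_template/template_filebeat' \
  -H 'Content-Type: application/json' \
  -d @template_filebeat.json

# Verify the template was stored
curl -s 'http://localhost:9200/_template/template_filebeat?pretty'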

Continue reading…

ELK ingest pipeline (Elasticsearch Ingest Node)

Filebeat + Elasticsearch + Kibana: a lightweight log collection and display system. https://wzyboy.im/post/1111.html?utm_source=tuicool&utm_medium=referral mentions that beat -> logstash -> elk can become beat -> elk ingest pipeline (Elasticsearch Ingest Node).

Elasticsearch Ingest Node is a feature added in Elasticsearch 5.0. Before Ingest Node existed, people usually put a Logstash Indexer in front of ES to preprocess data. With Ingest Node, most of what a Logstash Indexer does can be replaced by it; processors familiar to Logstash users, such as grok and geoip, are also available in the Ingest Node. For ES users with small data volumes, saving the cost of a Logstash instance is naturally welcome; for users with large data volumes, the Ingest Node, like the Master Node and Data Node, can be assigned dedicated nodes and scaled horizontally, so there is no need to worry about a performance bottleneck.

The Ingest Node currently supports dozens of processors, of which the script processor offers the greatest flexibility. Similar to /_template, the Ingest API lives under /_ingest. After the user submits a pipeline definition, that pipeline can be specified in Beats as the data preprocessor.
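As a sketch of how such a pipeline might be registered and then referenced from Beats (the pipeline name filebeat-geoip, the grok pattern, and the field names are assumptions, not the article's exact pipeline):

# Create an ingest pipeline with grok and geoip processors under /_ingest
curl -s -X PUT 'http://localhost:9200/_ingest/pipeline/filebeat-geoip' \
  -H 'Content-Type: application/json' -d '
{
  "description": "parse access log lines and add GeoIP data",
  "processors": [
    { "grok":  { "field": "message",  "patterns": ["%{COMBINEDAPACHELOG}"] } },
    { "geoip": { "field": "clientip", "target_field": "geoip" } }
  ]
}'

# Confirm it is stored
curl -s 'http://localhost:9200/_ingest/pipeline/filebeat-geoip?pretty'

In filebeat.yml the pipeline can then be selected with output.elasticsearch.pipeline: filebeat-geoip, so events skip Logstash and are preprocessed by the Ingest Node instead.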

Continue reading…

Step: Change the Kibana & ELK startup order. Now ELK imports template_filebeat first, then waits for Logstash to push logs into ELK, so ELK gets indices such as filebeat-6.4.2-2018.11.19 and filebeat-6.4.2-2018.11.20. Then Kibana imports the index-pattern and sets it as the default.

#!/bin/bash
echo '@edge http://dl-cdn.alpinelinux.org/alpine/edge/main' >> /etc/apk/repositories
echo '@edge http://dl-cdn.alpinelinux.org/alpine/edge/community' >> /etc/apk/repositories
echo '@edge http://dl-cdn.alpinelinux.org/alpine/edge/testing' >> /etc/apk/repositories
apk --no-cache upgrade
apk --no-cache add curl

echo "=====Elk config ========"
until echo | nc -z -v elasticsearch 9200; do echo "Waiting Elk Kibana to start.
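The excerpt is cut off inside the wait loop; a minimal sketch of how such a loop is usually completed (the 5-second sleep is an assumption) looks like:

# Block until Elasticsearch answers on port 9200, then continue with the
# template and index-pattern imports
until echo | nc -z -v elasticsearch 9200; do
  echo "Waiting for Elasticsearch to start..."
  sleep 5
done
echo "Elasticsearch is up"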

Continue reading…

Fxxx Kibana and ELK. Tried to do it all again, but still can't get geo_point…. Reindexing is no use. These are no use either:

POST /_refresh
POST /_flush/synced
POST /_cache/clear

Only doing this makes it apply. What a waste of time, Fxxx system.
Very bad documentation, very bad version changes… everything is BAD about ELK and Kibana.

1. Every time you see one of these "PUT, GET or DELETE" commands: where do you run it?
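For reference, those PUT/GET/DELETE snippets use the Kibana Dev Tools console syntax; from a shell the equivalent is plain curl against Elasticsearch (localhost:9200 assumed):

# Kibana Dev Tools:  POST /_refresh
curl -s -X POST 'http://localhost:9200/_refresh'

# Kibana Dev Tools:  POST /_flush/synced   (6.x-era synced flush)
curl -s -X POST 'http://localhost:9200/_flush/synced'

# Kibana Dev Tools:  POST /_cache/clear
curl -s -X POST 'http://localhost:9200/_cache/clear'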

Continue reading…
