Now check again….

1. template_filebeat.json — only this works:

{
  "index_patterns": ["filebeat*"],
  "settings": { "number_of_shards": 1 },
  "mappings": {
    "doc": {
      "properties": {
        "geoip.location":    { "type": "geo_point" },
        "geoip.coordinates": { "type": "geo_point" }
      }
    }
  }
}

The important part: "location" alone is wrong; it must be "geoip.location". But sometimes even this has no effect, because of the way I insert the index-pattern: geoip.location never shows up as a single field, and is always overwritten by geoip.location.lat and geoip.location.lon. See point 2.

2. index-pattern — with index-pattern-export.json, one way is to just try to put
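To confirm whether the template actually took effect on a new index, the field mapping can be checked directly (a sketch for the Kibana Dev Tools console; the index name is just an example from the later post):

```
GET filebeat-6.4.2-2018.11.19/_mapping/field/geoip.location
```

If the response shows "type": "geo_point", the template was applied. If it instead shows separate geoip.location.lat / geoip.location.lon numeric fields, the documents were indexed before the template existed: templates only apply at index-creation time, so the index has to be deleted and recreated (or reindexed) after the template is in place.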

Continue reading

elk ingest plugin pipeline

Filebeat + Elasticsearch + Kibana lightweight log collection and display system: https://wzyboy.im/post/1111.html?utm_source=tuicool&utm_medium=referral mentions that beat -> logstash -> elk can become beat -> elk ingest pipeline (Elasticsearch Ingest Node).

Elasticsearch Ingest Node is a feature added in Elasticsearch 5.0. Before Ingest Node appeared, people usually placed a Logstash Indexer in front of ES to pre-process data. With Ingest Node, most of the Logstash Indexer's functionality can be replaced: processors familiar to Logstash users, such as grok and geoip, are available in Ingest Node as well. For ES users with small data volumes, saving the cost of a Logstash machine is naturally pleasing; for ES users with large data volumes, Ingest Nodes, like Master Nodes and Data Nodes, can be given dedicated nodes and scaled horizontally, so there is no need to worry about performance bottlenecks. Ingest Node currently supports dozens of processors, of which the script processor offers the greatest flexibility. Similar to /_template, the Ingest API lives under /_ingest. After the user submits a pipeline definition, that pipeline can be specified in Beats as the data pre-processor.
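As a sketch of what such a pipeline definition might look like (the pipeline name geoip-info and the field names are my own assumptions, not from the post), submitted via the Ingest API:

```
PUT _ingest/pipeline/geoip-info
{
  "description": "resolve the source IP to a location with the geoip processor",
  "processors": [
    { "geoip": { "field": "src_ip", "target_field": "geoip" } }
  ]
}
```

Filebeat can then reference it in its Elasticsearch output, e.g. `output.elasticsearch.pipeline: "geoip-info"`, so every event is run through the pipeline before being indexed.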

Continue reading

logstash kibana ssh log

1. Filebeat ships /var/log/secure
2. Logstash filter:

filter {
  grok {
    #type => "syslog"
    match => ["message", "%{SYSLOGBASE} Failed password for (invalid user |)%{USERNAME:username} from %{IP:src_ip} port %{BASE10NUM:port} ssh2"]
    add_tag => "ssh_brute_force_attack"
  }
  grok {
    #type => "syslog"
    match => ["message", "%{SYSLOGBASE} Accepted password for %{USERNAME:username} from %{IP:src_ip} port %{BASE10NUM:port} ssh2"]
    add_tag => "ssh_sucessful_login"
  }
  geoip {
    source => "src_ip"
    target => "geoip"
    add_tag => [ "ssh-geoip" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    add_field => [ "geoipflag", "true" ]
  }
}
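A quick way to sanity-check the failed-password pattern outside Logstash is to run an equivalent regular expression over a sample line (a minimal sketch: the log line below is made up, and the sed extended regex only approximates the grok pattern above):

```shell
# A fabricated /var/log/secure line that the first grok rule should match
line='Nov 19 10:23:01 host sshd[1234]: Failed password for invalid user admin from 203.0.113.5 port 52344 ssh2'

# Capture groups mirroring %{USERNAME:username}, %{IP:src_ip}, %{BASE10NUM:port}
pattern='.*Failed password for (invalid user )?([^ ]+) from ([0-9.]+) port ([0-9]+) ssh2.*'
username=$(echo "$line" | sed -nE "s/$pattern/\2/p")
src_ip=$(echo "$line" | sed -nE "s/$pattern/\3/p")
port=$(echo "$line" | sed -nE "s/$pattern/\4/p")

echo "$username $src_ip $port"   # prints: admin 203.0.113.5 52344
```

The optional `(invalid user )?` group mirrors the `(invalid user |)` alternation in the grok pattern, so both "invalid user admin" and plain "admin" lines extract the same fields.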

Continue reading

Step: change the Kibana & ELK order. ELK now imports template_filebeat first, then waits for Logstash to push logs into ELK, so ELK gets indices such as filebeat-6.4.2-2018.11.19 and filebeat-6.4.2-2018.11.20. Then Kibana imports the index-pattern and sets it as the default.

#!/bin/bash
echo '@edge http://dl-cdn.alpinelinux.org/alpine/edge/main' >> /etc/apk/repositories
echo '@edge http://dl-cdn.alpinelinux.org/alpine/edge/community' >> /etc/apk/repositories
echo '@edge http://dl-cdn.alpinelinux.org/alpine/edge/testing' >> /etc/apk/repositories
apk --no-cache upgrade
apk --no-cache add curl
echo "=====Elk config ========"
until echo | nc -z -v elasticsearch 9200; do
  echo "Waiting Elk Kibana to start.
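The "ELK imports template_filebeat" step above amounts to a single request (a sketch in Kibana Dev Tools console form, reusing the template body from the first post; in the script it would be the same request sent with curl -XPUT to elasticsearch:9200 once the wait loop exits):

```
PUT _template/filebeat
{
  "index_patterns": ["filebeat*"],
  "settings": { "number_of_shards": 1 },
  "mappings": {
    "doc": {
      "properties": {
        "geoip.location":    { "type": "geo_point" },
        "geoip.coordinates": { "type": "geo_point" }
      }
    }
  }
}
```

Doing this before Logstash writes anything matters because the template only affects indices created after it exists.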

Continue reading

https://www.mobile01.com/topicdetail.php?f=18&t=5638571&p=1 E8372h-153 https://shopee.tw/-%E7%8F%BE%E8%B2%A8-%E5%8F%AF%E5%9B%9E%E5%BE%A9%E5%87%BA%E5%BB%A0%E5%80%BC-%E4%BF%9D%E5%9B%BA%E4%B8%80%E5%B9%B4%EF%BC%BD%E8%8F%AF%E7%82%BA-E8372h-153-4G-Wifi%E5%88%86%E4%BA%AB-E8372-E8372-153-i.24086409.308705863

Continue reading

Fxxx Kibana and ELK. Now trying again, but I still can't get geo_point…. Reindex is no use. These are no use either: POST /_refresh, POST /_flush/synced, POST /_cache/clear. Only doing this makes it apply. What a waste of time. Fxxx system. Very bad documentation, very bad version changes… everything is BAD about ELK and Kibana. 1. Every time I see these "PUT, GET or DELETE" commands — where are they used?
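For the record (my own answer, not from the original docs): those PUT / GET / DELETE snippets are written in the Kibana Dev Tools Console syntax (Kibana → Dev Tools), and each one is just a plain HTTP request against Elasticsearch, so curl works as well. For example (host and port are assumptions for a default local setup):

```
GET /_cat/indices?v

# equivalent from a shell:
#   curl -XGET 'http://localhost:9200/_cat/indices?v'
```

The method (GET/PUT/POST/DELETE) becomes the HTTP verb, and the path after it is appended to the Elasticsearch base URL.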

Continue reading

https://www.mobile01.com/topicdetail.php?f=291&t=5107288&p=1039#10381 Smart investors all read the financial reports and watch the earnings calls. Would what ordinary forum users say be more accurate than the earnings calls and financial reports? Personally, I don't believe so. So when I track a stock, the financial reports and earnings calls come first in my mind; those are results produced by professionals, and results like that are highly credible.

Continue reading

Author's picture

Sue boy

Sueboy Can support You

CIO

Taiwan