1、Filebeat reads /var/log/secure (the SSH auth log) and ships it to Logstash.
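A minimal filebeat.yml sketch for this (my assumption of the setup, not from the original notes; Filebeat 6.x syntax, Logstash host/port are placeholders):

filebeat.inputs:
- type: log
  paths:
    - /var/log/secure
output.logstash:
  hosts: ["logstash:5044"]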
2、Logstash filter: tag SSH brute-force attempts and successful logins, then enrich with GeoIP:

filter {
  grok {
    # type => "syslog"
    match => ["message", "%{SYSLOGBASE} Failed password for (invalid user |)%{USERNAME:username} from %{IP:src_ip} port %{BASE10NUM:port} ssh2"]
    add_tag => "ssh_brute_force_attack"
  }
  grok {
    # type => "syslog"
    match => ["message", "%{SYSLOGBASE} Accepted password for %{USERNAME:username} from %{IP:src_ip} port %{BASE10NUM:port} ssh2"]
    add_tag => "ssh_successful_login"
  }
  geoip {
    source => "src_ip"
    target => "geoip"
    add_tag => ["ssh-geoip"]
    add_field => ["[geoip][coordinates]", "%{[geoip][longitude]}"]
    add_field => ["[geoip][coordinates]", "%{[geoip][latitude]}"]
    add_field => ["geoipflag", "true"]
  }
}
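For reference, these are the kinds of /var/log/secure lines the two grok patterns match (hosts and addresses here are made up for illustration):

Nov 19 10:12:01 myhost sshd[12345]: Failed password for invalid user admin from 203.0.113.5 port 49812 ssh2
Nov 19 10:15:44 myhost sshd[12349]: Accepted password for deploy from 198.51.100.7 port 50122 ssh2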
Steps:
.Change the startup order of Kibana and Elasticsearch: first import the Filebeat template (template_filebeat) into Elasticsearch, then wait for Logstash to push logs into Elasticsearch. Elasticsearch then creates indices such as filebeat-6.4.2-2018.11.19 and filebeat-6.4.2-2018.11.20.
.Then have Kibana import the index pattern and set it as the default.
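If you need to load the template by hand, a sketch (assuming Filebeat 6.4.2 and Elasticsearch on localhost:9200; `filebeat export template` exists in Filebeat 6.x):

filebeat export template > filebeat.template.json
curl -XPUT -H 'Content-Type: application/json' http://localhost:9200/_template/filebeat-6.4.2 -d@filebeat.template.json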
#!/bin/bash
echo '@edge http://dl-cdn.alpinelinux.org/alpine/edge/main' >> /etc/apk/repositories
echo '@edge http://dl-cdn.alpinelinux.org/alpine/edge/community' >> /etc/apk/repositories
echo '@edge http://dl-cdn.alpinelinux.org/alpine/edge/testing' >> /etc/apk/repositories
apk --no-cache upgrade
apk --no-cache add curl
echo "===== Elk config ====="
# Block until Elasticsearch answers on port 9200.
until echo | nc -z -v elasticsearch 9200; do
  echo "Waiting for Elasticsearch to start..."
  sleep 5
done
Fought with Kibana and Elasticsearch and tried the whole thing again, but still can't get geo_point….
Reindexing didn't help. These had no effect either:

POST /_refresh
POST /_flush/synced
POST /_cache/clear
Only the procedure above (import the template before any data is indexed) made the geo_point mapping apply. A massive waste of time.
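My reading of why that works (the notes don't spell it out): a template only applies to newly created indices, so delete the stale Filebeat indices and let them be re-created, e.g.:

curl -XDELETE 'http://localhost:9200/filebeat-6.4.2-*'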
Bad documentation, breaking changes between versions… everything about ELK/Kibana has been painful.
1、Every time the docs show a "PUT, GET or DELETE" command, where do you actually run it? These are Elasticsearch REST API calls: paste them into the Kibana Dev Tools console, or send them with curl.
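For example, the same request both ways (assuming Elasticsearch on localhost:9200):

# In the Kibana Dev Tools console:
GET /_cat/indices?v

# Equivalent curl:
curl -XGET 'http://localhost:9200/_cat/indices?v'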
filter {
  json {
    source => "message"
  }
}
This means the log is shipped in JSON format: the json filter parses the message field, so the JSON keys become fields on the event. Some fields get set from the parsed JSON, while the raw data stays in message.
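A hypothetical example of what the filter does:

# Incoming event: message = {"user":"root","action":"login"}
# After the json filter, the event also has:
#   user   => "root"
#   action => "login"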
.Use these to test grok patterns against sample log lines:
https://grokconstructor.appspot.com/do/match
https://blog.johnwu.cc/article/elk-logstash-grok-filter.html
https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns
For example, geth (go-ethereum) log lines:
INFO [11-14|09:58:17.730] Generating DAG in progress epoch=1 percentage=99 elapsed=4m8.643s
INFO [11-15|01:41:33.455] Generating DAG in progress epoch=1 percentage=9 elapsed=27.
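A grok pattern sketch for these DAG-progress lines (my own, verified only against the two samples above):

filter {
  grok {
    match => ["message", "%{LOGLEVEL:level}\s*\[%{DATA:logtime}\]\s+Generating DAG in progress\s+epoch=%{NUMBER:epoch}\s+percentage=%{NUMBER:percentage}\s+elapsed=%{GREEDYDATA:elapsed}"]
  }
}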
First create the index-pattern and export it as a JSON file; then delete the index-pattern you created and re-import it through the REST API.
1、List the index-patterns (create one through the web UI first):
curl http://localhost:5601/api/saved_objects/_find?type=index-pattern
2、Export the saved_objects index-pattern:
curl http://localhost:5601/api/saved_objects/index-pattern/c0c02200-e6e0-11e8-b183-ebb59b02f871 > export.json
c0c02200-e6e0-11e8-b183-ebb59b02f871 is the id found in step 1.
The exported JSON file can't be used directly; you must wrap it.

Prepend this header:

{
"objects": [

And append this at the end:

]}
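The wrapped file ends up shaped like this (the inner object is whatever the export returned; the attributes below are placeholders):

{
  "objects": [
    {
      "id": "c0c02200-e6e0-11e8-b183-ebb59b02f871",
      "type": "index-pattern",
      "attributes": { "title": "filebeat-*" }
    }
  ]
}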
3、Import the saved_objects index-pattern (remember to delete kibana-* first):
curl -v -XPOST localhost:5601/api/kibana/dashboards/import?force=true -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d @./export.json
Keeping the JSON file in the same directory where you run curl is enough.
4、Force-set the default index (Kibana -> Management -> Advanced Settings -> defaultIndex):
curl -XPOST http://localhost:5601/api/kibana/settings/defaultIndex -H "kbn-xsrf: true" -H "Content-Type: application/json" -d '{"value": "id"}'
Here id is the id value found inside export.json.
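For example, using the id from step 2:

curl -XPOST http://localhost:5601/api/kibana/settings/defaultIndex -H "kbn-xsrf: true" -H "Content-Type: application/json" -d '{"value": "c0c02200-e6e0-11e8-b183-ebb59b02f871"}'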
If Kibana is already open in the browser, refresh the page (F5).
https://www.rosehosting.com/blog/install-and-configure-the-elk-stack-on-ubuntu-16-04/
https://www.elastic.co/guide/en/logstash/current/configuration.html
https://dotblogs.com.tw/supershowwei/2016/05/25/185741
After the install finishes:
1、Put your Logstash conf files under /etc/logstash/conf.d/ (a minimal skeleton follows this list).
2、On Ubuntu, Logstash fails to listen because of a permission error, so edit /etc/logstash/startup.options:

LS_USER=root

3、Then run /usr/share/logstash/bin/system-install to regenerate the service config with the new LS_USER.
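A minimal sketch of a conf file for /etc/logstash/conf.d/ (the port and host are assumptions, not from the original notes):

input {
  beats {
    port => 5044
  }
}
filter {
  # grok / geoip / mutate blocks go here
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}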
Note:

mutate {
  add_field => {
    "logTime" => "%{+YYYY-MM-dd} %{time}"
  }
}
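For context: %{+YYYY-MM-dd} is sprintf date formatting taken from the event's @timestamp, while %{time} must already exist as a field (e.g. extracted by an earlier grok). A hypothetical result:

# Given @timestamp = 2018-11-19T09:58:17Z and time = "09:58:17":
#   logTime => "2018-11-19 09:58:17"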