All output columns, handled with a Logstash filter:

filter {
  if [srctype] == "etl" {   # [fields][srctype]
    csv {
      columns => [
        "number", "hash", "parent_hash", "nonce", "sha3_uncles", "logs_bloom",
        "transactions_root", "state_root", "receipts_root", "timestamp", "extra_data",
        "transaction_count", "gas_limit", "size", "total_difficulty", "difficulty",
        "miner", "block_hash", "block_number", "transaction_index", "from_address",
        "to_address", "value", "gas", "gas_price", "input", "address", "bytecode",
        "function_sighashes", "is_erc20", "is_erc721", "log_index", "transaction_hash",
        "data", "topics", "cumulative_gas_used", "gas_used", "contract_address",
        "root", "status"
      ]
      separator => ","
      remove_field => ["message"]
      #autodetect_column_names => true    # has problems
      #autogenerate_column_names => true  # has problems
      skip_empty_columns => true
      skip_empty_rows => true
    }
  }
}
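The [srctype] test above assumes the CSV lines arrive from Filebeat with a custom srctype field. A minimal sketch of the matching Filebeat 6.x input; the path and the Logstash host are assumptions, not taken from the post:

filebeat.inputs:
- type: log
  paths:
    - /data/ethereum-etl/*.csv        # assumed export location
  fields:
    srctype: etl                      # matched by `if [srctype] == "etl"`
  fields_under_root: true             # without this, the field is [fields][srctype]

output.logstash:
  hosts: ["logstash:5044"]            # assumed Logstash endpoint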


Original:

geoip {
  source => "filebeatserverip"
  target => "filebeatserveripgeoip"
  add_field => [ "[filebeatserveripgeoip][coordinates]", "%{[filebeatserveripgeoip][longitude]}" ]
  add_field => [ "[filebeatserveripgeoip][coordinates]", "%{[filebeatserveripgeoip][latitude]}" ]
}
mutate {
  convert => ["[filebeatserveripgeoip][coordinates]", "float"]
}

Delete these lines:

add_field => [ "[filebeatserveripgeoip][coordinates]", "%{[filebeatserveripgeoip][longitude]}" ]
add_field => [ "[filebeatserveripgeoip][coordinates]", "%{[filebeatserveripgeoip][latitude]}" ]
convert => ["[filebeatserveripgeoip][coordinates]", "float"]

which leaves:

geoip {
  source => "filebeatserverip"
  target => "filebeatserveripgeoip"
}
mutate { }

=====

{
  "index_patterns": ["filebeat*", "heartbeat*"],
  "settings": { "number_of_shards": 1 },
  "mappings": {
    "doc": {
      "properties": {
        "filebeatserveripgeoip.
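With the simplified filter, the geoip plugin still writes a location object (lat/lon) under the target. A minimal check from Kibana Dev Tools that the mapping really became geo_point might look like this; the filebeat-* pattern and the .location sub-field are assumptions based on the standard geoip filter output, not on the truncated template above:

# What type does the live index report for the geoip location field?
GET filebeat-*/_mapping/field/filebeatserveripgeoip.location

# Is a template carrying the geo_point mapping registered at all?
GET _template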


geth log has no year

A geth "mined" log line:

INFO [12-07|13:04:44] 🔨 mined potential block number=1934700 hash=3f9161…88da7d

has only month-day, no year, so the year has to be added back:

grok {
  match => ["message", "%{LOGLEVEL:logType} \[%{DATA:gethmm}-%{DATA:gethdd}\|%{DATA:gethtime}\] %{GREEDYDATA:tmessage} number=(?<blocknumber>\b\w+\b) hash=(?<blockhash>\b\w+...\w+\b)"]
  add_field => ["gethdate", "%{[gethmm]}-%{[gethdd]} %{[gethtime]}"]
}
ruby {
  code => "
    tstamp = event.get('@timestamp').to_i
    event.set('epoch', tstamp)
    event.set('gethdate', Time.at(tstamp).strftime('%Y') + '-' + event.get('gethdate'))
  "
}
date {
  match => [ "gethdate", "YYYY-MM-dd HH:mm:ss" ]
  target => "gethdate"
  timezone => "Asia/Taipei"
}

Recreate the index:

GET _cat/indices?v
GET _cat/indices?v&s=index
GET filebeat-6.5.1-2018.12.06
DELETE filebeat-6.5.1-2018.12.06
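The blocknumber and blockhash capture names above are stand-ins (the original names were lost in the page markup), and the throwaway test pipeline below is likewise a sketch, not part of the post; it feeds the sample line through the filter and prints the parsed event:

# test-geth.conf (any file name works)
input { stdin { } }

filter {
  # paste the grok / ruby / date blocks from above here
}

output { stdout { codec => rubydebug } }

Run it with:

echo 'INFO [12-07|13:04:44] 🔨 mined potential block number=1934700 hash=3f9161…88da7d' | bin/logstash -f test-geth.conf

The printed event should contain gethmm=12, gethdd=07, gethtime=13:04:44 and a gethdate with the current year prepended.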


Export

index-pattern:
curl http://xxx.xxx.xxx.xxx:5601/api/saved_objects/index-pattern/f1836c20-e880-11e8-8d66-7d7b4c3a5906 > index-pattern-export.json

visualization:
curl http://xxx.xxx.xxx.xxx:5601/api/saved_objects/visualization/1eb85311-f901-11e8-864c-bd4880954537 > visual-export.json

Import

index-pattern:
curl -v -XPOST kibana:5601/api/kibana/dashboards/import?force=true -H "kbn-xsrf:true" -H "Content-type:application/json" -d @/usr/share/config/config/index-pattern-export.json

visualization:
curl -v -XPOST kibana:5601/api/kibana/dashboards/import?force=true -H "kbn-xsrf:true" -H "Content-type:application/json" -d @/usr/share/config/config/visual-export.json

PS: the visualization file can hold multiple objects:

{ "objects": [
  {"id":"0c298010-f901-11e8-864c-bd4880954537",...},
  {"id":"1eb85300-f901-11e8-864c-bd4880954537",...}
]}
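The object IDs in the export URLs come from Kibana itself. As a sketch, on Kibana 6.x they can be listed with the saved objects _find endpoint (the host placeholder is kept from the commands above):

curl "http://xxx.xxx.xxx.xxx:5601/api/saved_objects/_find?type=index-pattern&per_page=100"
curl "http://xxx.xxx.xxx.xxx:5601/api/saved_objects/_find?type=visualization&per_page=100"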


1. When Logstash adds fields (add_field or new grok fields), the Kibana DISCOVER table shows the new fields, but marked with a "!".
2. In Kibana Management -> Index Patterns, click "Refresh field list" and the "!" disappears.
3. Logstash sets some field's type to geo_point, but the Kibana DISCOVER table still shows the field type as "text". Try deleting the index:

GET _cat/indices?v
GET _cat/indices?v&s=index
GET filebeat-6.5.1-2018.12.06
DELETE filebeat-6.5.1-2018.12.06

After the real index is deleted it gets rebuilt, and the geo_point type usually shows up.
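For hosts without Kibana Dev Tools, the same checks can be run with curl; a sketch, assuming elasticsearch:9200 as the endpoint:

curl "http://elasticsearch:9200/_cat/indices?v&s=index"
curl "http://elasticsearch:9200/filebeat-6.5.1-2018.12.06"
curl -XDELETE "http://elasticsearch:9200/filebeat-6.5.1-2018.12.06"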


Now check again….

1. template_filebeat.json can only be:

{
  "index_patterns": ["filebeat*"],
  "settings": { "number_of_shards": 1 },
  "mappings": {
    "doc": {
      "properties": {
        "geoip.location":    { "type": "geo_point" },
        "geoip.coordinates": { "type": "geo_point" }
      }
    }
  }
}

Here, importing plain "location" is an error; it must be "geoip.location". But sometimes even this has no effect, because of the way I insert the index-pattern: geoip.location never ends up as a field, it is always overwritten by geoip.location.lat and geoip.location.lon. See 2.

2. index-pattern — index-pattern-export.json. One way is just to try to put
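A sketch of loading template_filebeat.json into Elasticsearch so it applies before the index is (re)created; the elasticsearch:9200 host is an assumption, the template name comes from the file name above, and the body is the JSON above saved to that file:

curl -XPUT "http://elasticsearch:9200/_template/template_filebeat" -H "Content-Type: application/json" -d @template_filebeat.json

# confirm it was stored
curl "http://elasticsearch:9200/_template/template_filebeat"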


ELK ingest plugin pipeline

"Filebeat + Elasticsearch + Kibana: a lightweight log collection and display system" (https://wzyboy.im/post/1111.html?utm_source=tuicool&utm_medium=referral) mentions that the beat -> logstash -> elk chain can become beat -> elk ingest plugin (Elasticsearch Ingest Node). The Elasticsearch Ingest Node is a feature added in Elasticsearch 5.0. Before Ingest Node appeared, people usually put a Logstash Indexer in front of ES to preprocess data; with Ingest Node, most of what the Logstash Indexer did can be handled by it, and processors familiar to Logstash users such as grok and geoip are available in Ingest Node as well. For ES users with small data volumes, saving the cost of a Logstash machine is naturally welcome; for ES users with large data volumes, Ingest Nodes, like Master Nodes and Data Nodes, can be given dedicated machines and scaled horizontally, so performance bottlenecks are not a concern either. Ingest Node currently supports dozens of processors, of which the script processor is the most flexible. Similar to /_template, the Ingest API lives under /_ingest; once a pipeline definition has been submitted, that pipeline can be named in Beats as the data preprocessor.
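A minimal sketch of that beat -> ingest flow on the 6.x stack: register a pipeline under /_ingest, then name it in Filebeat's Elasticsearch output. The pipeline name geth-log and the grok pattern (borrowed from the geth post above) are illustrative assumptions:

PUT _ingest/pipeline/geth-log
{
  "description": "parse geth log lines without a Logstash indexer",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{LOGLEVEL:logType} \\[%{DATA:gethmm}-%{DATA:gethdd}\\|%{DATA:gethtime}\\] %{GREEDYDATA:tmessage}"]
      }
    }
  ]
}

Then in filebeat.yml:

output.elasticsearch:
  hosts: ["elasticsearch:9200"]   # assumed host
  pipeline: geth-log

A pipeline can also be dry-run against sample documents with POST _ingest/pipeline/geth-log/_simulate before pointing Beats at it.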



Sue boy

Sueboy Can support You

CIO

Taiwan