https://github.com/appbaseio/dejavu
Newer versions of geth write millisecond timestamps in geth.log:

Old: `INFO [04-22|16:29:56]`
New: `INFO [03-28|13:43:35.004]`

The difference is the `.004`.
Looking at the logs, only a Logstash warning shows up. A colleague explained it correctly:

> The date format Logstash produces, `2019-03-28 13:43:35.004`, is not recognized by Elasticsearch. To Logstash it is only a Warning, but to Elasticsearch it is an Error, so the log entry never gets written.

So in logstash.conf the date filter must additionally match `"YYYY-MM-dd HH:mm:ss,SSS"` and `"YYYY-MM-dd HH:mm:ss.SSS"`:
```
date {
  match    => [ "gethdate", "YYYY-MM-dd HH:mm:ss", "YYYY-MM-dd HH:mm:ss,SSS", "YYYY-MM-dd HH:mm:ss.SSS" ]
  target   => "gethdate"
  timezone => "Asia/Taipei"
}
```

With this, geth.log imports normally.
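To see why Elasticsearch rejects the value while Logstash only warns, here is a minimal sketch that reproduces the Elasticsearch-side error (the `datetest` index is hypothetical; assumes Elasticsearch 6.x on localhost):

```bash
# Create an index whose gethdate field is explicitly mapped as a date.
curl -XPUT "http://localhost:9200/datetest" -H 'Content-Type: application/json' -d'
{ "mappings": { "_doc": { "properties": { "gethdate": { "type": "date" } } } } }'

# Elasticsearch's default date format is strict_date_optional_time||epoch_millis,
# which expects a "T" separator. The space-separated value below is therefore
# rejected with a mapper_parsing_exception -- the "Error" side of the story.
curl -XPUT "http://localhost:9200/datetest/_doc/1" -H 'Content-Type: application/json' -d'
{ "gethdate": "2019-03-28 13:43:35.004" }'
```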
docker-compose.yml:

```yaml
version: '3.3'

services:
  elasticsearch:
    build:
      context: elasticsearch/
    volumes:
      #- ./elasticsearch/esdata:/usr/share/elasticsearch/data:rw
      - alldata:/usr/share/elasticsearch/data:rw
      #- ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    networks:
      - fastdev
  logstash:
    build:
      context: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/etc/logstash/logstash.yml:ro
      - ./logstash/pipeline:/etc/logstash/conf.d:ro
    ports:
      - "5000:5000"
      - "5044:5044"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - fastdev
    depends_on:
      - elasticsearch
  kibana:
    build:
      context: kibana/
    volumes:
      - .
```
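Bringing the stack up, as a rough sketch. The `vm.max_map_count` bump is Elasticsearch's documented requirement for Docker hosts; since this setup uses docker-machine, it has to run inside the VM:

```bash
# Elasticsearch refuses to start in production mode if the host's
# vm.max_map_count is below 262144. With docker-machine, set it in the VM:
docker-machine ssh default "sudo sysctl -w vm.max_map_count=262144"

# Build the custom images and start the whole stack in the background.
docker-compose up -d --build

# Tail the logs to watch Elasticsearch, Logstash, and Kibana come up.
docker-compose logs -f
```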
ethereum-etl overwrites its output files on every export, so Filebeat keeps re-reading them from the start and ELK receives duplicate records…
So export only the blocks that are new since the last run.

.env:

```
STARTBLOCK=01205866
ENDBLOCK=01205888
```

startetl.sh — `IP_PORT` is the external IP and port of go-ethereum-node1 (here, the docker-machine address). Be careful with the path the script reads its sourced file from:
```bash
#!/bin/bash
IP_PORT=192.168.99.100:18545
ETH_METHOD=eth_blockNumber

# Ask the geth node for its current block number over JSON-RPC.
BLOCKNUMBER_JSON_HEX=$(curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"'$ETH_METHOD'","params":[],"id":1}' \
  $IP_PORT | jq '.result' | tr -d '"')

# bash printf understands the 0x prefix, so %08d converts hex to zero-padded decimal.
BLOCKNUMBER_DEX=$(printf "%08d\n" $BLOCKNUMBER_JSON_HEX)

printf "\n===== Now Geth BlockNumber =====\n"
printf "HEX: %s\n" $BLOCKNUMBER_JSON_HEX
printf "DEC: %s\n" $BLOCKNUMBER_DEX

source .
```
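The export step itself is not shown above; a sketch of how `STARTBLOCK`/`ENDBLOCK` would drive it, assuming ethereum-etl is installed (`pip install ethereum-etl`; the command form follows ethereum-etl's README, output file names are assumptions):

```bash
# Load STARTBLOCK and ENDBLOCK (assuming the sourced file is the .env above).
source .env

# Export only the new range so Filebeat never re-ships old records.
ethereumetl export_blocks_and_transactions \
  --start-block $STARTBLOCK \
  --end-block $ENDBLOCK \
  --provider-uri http://$IP_PORT \
  --blocks-output blocks.csv \
  --transactions-output transactions.csv
```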
{"type":"log","@timestamp":"2019-01-21T08:57:51Z","tags":["status","plugin:elasticsearch@6.5.2","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Request Timeout after 3000ms","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"} 1. First use oss
FROM docker.elastic.co/elasticsearch/elasticsearch-oss:6.5.2
FROM docker.elastic.co/kibana/kibana-oss:6.5.2
So Not X-pack problem
2. The real problem is that the connection to Elasticsearch fails.

Even though logging into the Kibana container and pinging the Elasticsearch container gets a response, the Kibana logs keep printing the error above, and the Kibana web page shows “Kibana server is not ready yet”.

The fix: get the Elasticsearch container's IP and point the Kibana container at it directly.
https://blog.csdn.net/qq_38486203/article/details/80817037
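A sketch of that fix. The container name `elasticsearch`, the network `fastdev`, and `ELASTICSEARCH_URL` (the Kibana 6.x setting) are assumptions taken from the compose file above; under docker-compose the actual container and network names usually carry a project prefix:

```bash
# Find the IP Docker assigned to the Elasticsearch container.
ES_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' elasticsearch)
echo "Elasticsearch container IP: $ES_IP"

# Recreate Kibana pointing straight at that IP.
docker rm -f kibana
docker run -d --name kibana --network fastdev \
  -p 5601:5601 \
  -e ELASTICSEARCH_URL="http://$ES_IP:9200" \
  docker.elastic.co/kibana/kibana-oss:6.5.2
```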
Search for the latest `minedNumber` in Kibana Dev Tools:
```
GET /filebeat-6.*-geth*/_search?q=geth_ip:xxx.xxx.xxx.xxx
{
  "_source": ["name", "minedNumber", "gethdate"],
  "sort": [
    { "gethdate": { "order": "desc" } }
  ],
  "from": 1,
  "size": 1
}
```

Get `minedNumber` with curl:
```bash
curl -XGET "http://xxx.xxx.xxx.xxx:9200/filebeat-6.*-geth*/_search?q=geth_ip:xxx.xxx.xxx.xxx" \
  -H 'Content-Type: application/json' -d'
{
  "_source": ["name", "minedNumber", "gethdate"],
  "sort": [
    { "gethdate": { "order": "desc" } }
  ],
  "from": 1,
  "size": 1
}' | jq ".hits.hits[]._source.minedNumber"
```
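Note that `"from": 1` with `"size": 1` returns the second-newest document; use `"from": 0` for the newest. A usage sketch for capturing the value in a script (jq's `-r` flag strips the surrounding quotes; endpoint and fields as above):

```bash
# Grab the most recently indexed minedNumber into a shell variable.
MINED_NUMBER=$(curl -s -XGET "http://xxx.xxx.xxx.xxx:9200/filebeat-6.*-geth*/_search?q=geth_ip:xxx.xxx.xxx.xxx" \
  -H 'Content-Type: application/json' -d'
{ "_source": ["minedNumber"], "sort": [ { "gethdate": { "order": "desc" } } ], "from": 1, "size": 1 }' \
  | jq -r '.hits.hits[]._source.minedNumber')
echo "Latest indexed minedNumber: $MINED_NUMBER"
```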