Grafana/Elasticsearch sum with group by timestamp: values are always 0

pcww981p · asked on 2023-06-05 in ElasticSearch

I have data like this:

datetime    resolution_code area_code   area_type_code  area_name   map_code    total_load_value    update_time
"2014-12-31 23:15:00"   "PT15M" "10Y1001A1001A83F"  "CTY"   "DE CTY"    "DE"    44474.91
"2014-12-31 23:00:00"   "PT15M" "10Y1001A1001A83F"  "CTY"   "DE CTY"    "DE"    44839.67
"2014-12-31 23:45:00"   "PT15M" "10Y1001A1001A83F"  "CTY"   "DE CTY"    "DE"    44507.17
"2014-12-23 07:00:00"   "PT60M" "10YES-REE------0"  "CTY"   "ES CTY"    "ES"    31164

This data lives primarily in PostgreSQL, and I push it into Elasticsearch with Logstash:

input {
  jdbc {
    jdbc_driver_library => "path/postgresql-42.6.0.jar"
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/european_energy_data"
    jdbc_user => "user"
    jdbc_password => "password"
    jdbc_driver_class => "org.postgresql.Driver"
    statement => "SELECT * from public.total_load LIMIT 10000"
  }
}

filter {
  mutate {
    convert => { "date_time" => "string" }
    convert => { "total_load_value" => "float" }
  }
  date {
    # parse the stringified timestamp and store it in @datetime
    match => ["date_time", "YYYY-MM-dd'T'HH:mm:ss'.'SSS'Z'"]
    target => "@datetime"
  }
}

output {
  stdout {}
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "network"
  }
}

The data is there in Elasticsearch. Now I want total_load_value displayed summed per hour or per day.
With a curl query, everything works fine:

curl -X POST "localhost:9200/network/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": {
    "query_string": {
      "query": "_index:'network'"
    }
  },
  "aggs": {
    "load_by_year": {
      "date_histogram": {
        "field": "date_time",
        "calendar_interval": "day"
      },
      "aggs": {
        "total_load_value": {
          "sum": {
            "field": "total_load_value"
          }
        }
      }
    }
  }
}
'
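
For reference, it is easy to check how Logstash mapped the fields involved (the sum is only meaningful if total_load_value came through as a numeric type and the grouping field as a date):

curl -X GET "localhost:9200/network/_mapping/field/date_time,@datetime,total_load_value?pretty"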

However, I cannot get the same thing through Grafana: whenever I try to aggregate by summing my values, everything comes back as 0.
Edit: as @jan-garaj pointed out, my Grafana screenshots were wrong because I was grouping by @timestamp instead of date_time, so I have removed them.
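
For context, the panel query I am trying to build corresponds to roughly the following target in the dashboard JSON; I am writing this from memory, so treat the exact keys as a sketch rather than a real export:

{
  "refId": "A",
  "query": "_index:network",
  "timeField": "date_time",
  "metrics": [
    { "id": "1", "type": "sum", "field": "total_load_value" }
  ],
  "bucketAggs": [
    { "id": "2", "type": "date_histogram", "field": "date_time", "settings": { "interval": "1d" } }
  ]
}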

klr1opcd · #1

You are using the wrong time field in Grafana (@timestamp). It looks like you should be using @datetime (or possibly date_time).
Try configuring the correct time field for your index in the Grafana data source configuration.
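
If the data source is provisioned from a file rather than through the UI, the time field is set under jsonData; here is a minimal sketch (names are illustrative, and depending on the Grafana version the index name goes in database or in jsonData.index):

apiVersion: 1
datasources:
  - name: Elasticsearch-network     # illustrative name
    type: elasticsearch
    access: proxy
    url: http://localhost:9200
    database: network               # index name on older Grafana versions
    jsonData:
      index: network                # index name on newer Grafana versions
      timeField: "@datetime"        # must match the date field Logstash writes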

qlfbtfca · #2

I got it working with a few small changes to my pipeline .conf. The main change was removing the LIMIT so that the whole table gets uploaded; that was a silly mistake on my part, I was clearly exhausted. Beyond that, a few test runs showed that I was losing some data during the SQL import when the page size was too large, and that I was missing an ORDER BY in the SQL statement.
I have not dug into the reasons any further; I will edit this answer when I find the time.

input {
  jdbc {
    jdbc_driver_library => "path/postgresql-42.6.0.jar"
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/european_energy_data"
    jdbc_user => "user"
    jdbc_password => "password"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_paging_enabled => true  # to avoid memory crashes
    jdbc_page_size => 100000     # keep the page size reasonable
    statement => "SELECT * from public.total_load ORDER BY date_time ASC"  # ORDER BY seems important to get a complete import
  }
}

# filter not needed anymore, date_time was already recognized as an ISO timestamp

output {
  # stdout {}
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "network"
  }
}
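
One way to double-check that the whole table really made it across after dropping the LIMIT is to compare the row count in PostgreSQL with the document count in the index (connection details taken from the pipeline above):

psql -d european_energy_data -U user -c "SELECT count(*) FROM public.total_load;"
curl -X GET "localhost:9200/network/_count?pretty"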
