How do I correctly configure OpenSearch as a Logstash output? I get a host unreachable error



I am currently connected to a container-hosted Kubernetes cluster, where I run Kafka and OpenSearch/OpenSearch Dashboards deployments. I am trying to run Logstash so that I can get data from a Kafka topic into OpenSearch. This is the Logstash image I use: https://hub.docker.com/r/opensearchproject/logstash-oss-with-opensearch-output-plugin. Here is my Logstash configuration: https://justpaste.it/47676. And here is my OpenSearch configuration: https://justpaste.it/a090p. When I deploy Logstash, I successfully consume data from the Kafka topic, so my input plugin works as expected, but the output does not: I fail to ship data from Logstash to OpenSearch. Here are the logs from the Logstash pod: https://justpaste.it/620g4.

Here is the output of "kubectl get services":
NAME                                  TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                               AGE
dashboards-opensearch-dashboards      ClusterIP      10.96.114.252    <none>        5601/TCP                              5d20h
grafana                               ClusterIP      10.107.83.28     <none>        3000/TCP                              44h
logstash-service                      LoadBalancer   10.102.132.114   <pending>     5044:31333/TCP                        28m
loki                                  ClusterIP      10.99.30.246     <none>        3100/TCP                              43h
loki-headless                         ClusterIP      None             <none>        3100/TCP                              43h
my-cluster-kafka-0                    NodePort       10.101.196.50    <none>        9094:32000/TCP                        53m
my-cluster-kafka-1                    NodePort       10.96.247.75     <none>        9094:32001/TCP                        53m
my-cluster-kafka-2                    NodePort       10.98.203.5      <none>        9094:32002/TCP                        53m
my-cluster-kafka-bootstrap            ClusterIP      10.111.178.24    <none>        9091/TCP,9092/TCP,9093/TCP            53m
my-cluster-kafka-brokers              ClusterIP      None             <none>        9090/TCP,9091/TCP,9092/TCP,9093/TCP   53m
my-cluster-kafka-external-bootstrap   NodePort       10.109.134.74    <none>        9094:32100/TCP                        53m
my-cluster-zookeeper-client           ClusterIP      10.98.157.173    <none>        2181/TCP                              54m
my-cluster-zookeeper-nodes            ClusterIP      None             <none>        2181/TCP,2888/TCP,3888/TCP            54m
opensearch-cluster-master             ClusterIP      10.98.55.121     <none>        9200/TCP,9300/TCP                     19h
opensearch-cluster-master-headless    ClusterIP      None             <none>        9200/TCP,9300/TCP                     19h
prometheus-operated                   ClusterIP      None             <none>        9090/TCP                              25m
prometheus-operator                   ClusterIP      None             <none>        8080/TCP                              50m
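As a sanity check (hypothetical commands; they assume the default namespace and the default admin/admin credentials of the OpenSearch security demo config), you can verify from inside the cluster that the ClusterIP service opensearch-cluster-master actually answers on port 9200 before blaming Logstash:

```
kubectl run os-debug -it --rm --restart=Never --image=curlimages/curl -- \
  curl -sk -u admin:admin https://opensearch-cluster-master:9200
```

If this returns the OpenSearch cluster info JSON, the service is reachable and the problem is in the Logstash output configuration; if it hangs or fails, the problem is at the service or network level.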

What am I doing wrong, and how do I establish this connection?

I figured it out. I think Logstash was expecting a valid SSL certificate, and that is why the connection was refused. The way I "fixed" this (since I do not need SSL certificate verification for this project right now) was to change the Logstash configuration as follows.

logstash.conf: |
  input {
    kafka {
      codec => json
      bootstrap_servers => "10.111.178.24:9092"
      topics => ["t_events"]
    }
  }
  output {
    opensearch {
      hosts    => ["https://10.102.102.109:9200"]
      ssl_certificate_verification => false
      user     => "admin"
      password => "admin"
      index    => "logstash-logs-%{+YYYY.MM.dd}"
    }
  }

So I added the "ssl_certificate_verification => false" line to the config, which allowed me to connect from Logstash to OpenSearch and send data. Traffic is still encrypted because the HTTPS protocol is used, but certificate verification is skipped, which is acceptable for this project.
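One further tweak worth considering (a sketch, not tested against this cluster): hard-coded ClusterIPs such as 10.102.102.109 and 10.111.178.24 can change when a service is recreated, while the service DNS names from the "kubectl get services" output above are stable. Assuming both services live in the default namespace, the same config written against DNS names would look like:

```
input {
  kafka {
    codec => json
    # stable DNS name of the my-cluster-kafka-bootstrap ClusterIP service
    bootstrap_servers => "my-cluster-kafka-bootstrap:9092"
    topics => ["t_events"]
  }
}
output {
  opensearch {
    # stable DNS name of the opensearch-cluster-master ClusterIP service
    hosts    => ["https://opensearch-cluster-master:9200"]
    ssl_certificate_verification => false
    user     => "admin"
    password => "admin"
    index    => "logstash-logs-%{+YYYY.MM.dd}"
  }
}
```

If Logstash runs in a different namespace, use the fully qualified form, e.g. opensearch-cluster-master.default.svc.cluster.local.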
