We choose to download the tar.gz archive.
Click here to download logstash-7.8.0
After the download finishes, extract it. (The extraction path is up to you; Logstash runs from wherever you extract it. Here we use /opt.)
tar -zxvf logstash-7.8.0.tar.gz
Note: Logstash depends on the JAVA_HOME environment variable, so we need to install Java first.
Before installing, check whether Java is already present: java -version
Click here to download Java 8
Create a java directory under /usr/ and change into it:

mkdir /usr/java
cd /usr/java
Extract the JDK: place the downloaded jdk-8u251-linux-x64.tar.gz under /usr/java/, then run:
tar -zxvf jdk-8u251-linux-x64.tar.gz
Set the environment variables:

vim /etc/profile
Add the following to the profile file and save:
#set java environment
JAVA_HOME=/usr/java/jdk1.8.0_251
JRE_HOME=/usr/java/jdk1.8.0_251/jre
CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JRE_HOME CLASS_PATH PATH

Note: set JAVA_HOME and JRE_HOME according to your actual installation path and JDK version.
Reload the environment variables:

source /etc/profile
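A quick sanity check that the variable took effect (the expected output below assumes the install path used above):

echo $JAVA_HOME
/usr/java/jdk1.8.0_251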
Check the Java version:

java -version
[root@PCNCMCNSA0036 java]# java -version
java version "1.8.0_251"
Java(TM) SE Runtime Environment (build 1.8.0_251-b08)
Java HotSpot(TM) 64-Bit Server VM (build 25.251-b08, mixed mode)

We will test Logstash using a configuration file.
Create a test configuration file named test.conf in the Logstash config directory:
cd /opt/logstash-7.8.0/config
vim test.conf
input {
  stdin {}
}
output {
  stdout { codec => "rubydebug" }
}

Start Logstash with this configuration file:

[root@PCNCMCNSA0036 logstash-7.8.0]# ./bin/logstash -f config/test.conf
Sending Logstash logs to /opt/logstash-7.8.0/logs which is now configured via log4j2.properties
[2020-07-02T07:20:04,295][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/opt/logstash-7.8.0/data/queue"}
[2020-07-02T07:20:04,613][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/opt/logstash-7.8.0/data/dead_letter_queue"}
[2020-07-02T07:20:06,067][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-07-02T07:20:06,133][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.8.0", "jruby.version"=>"jruby 9.2.11.1 (2.5.7) 2020-03-25 b1f55b1a40 Java HotSpot(TM) 64-Bit Server VM 25.251-b08 on 1.8.0_251-b08 +indy +jit [linux-x86_64]"}
[2020-07-02T07:20:06,342][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"29c8a3c8-68bc-4fe4-84ab-a9317af20c12", :path=>"/opt/logstash-7.8.0/data/uuid"}
[2020-07-02T07:20:13,143][INFO ][org.reflections.Reflections] Reflections took 113 ms to scan 1 urls, producing 21 keys and 41 values
[2020-07-02T07:20:16,508][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/opt/logstash-7.8.0/config/test.conf"], :thread=>"#<Thread:0x3c645c99 run>"}
[2020-07-02T07:20:19,268][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[2020-07-02T07:20:19,463][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-07-02T07:20:20,227][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

Note: startup takes a moment. Wait patiently until the "Successfully" message appears.
Then type any string; Logstash will echo it back as a structured event.
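For example, typing "hello logstash" produces output of the following shape (a hypothetical run; the host and @timestamp values will differ on your machine):

hello logstash
{
          "host" => "PCNCMCNSA0036",
       "message" => "hello logstash",
      "@version" => "1",
    "@timestamp" => 2020-07-02T07:25:00.000Z
}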
When the test is finished, press Ctrl+C to stop Logstash.
Click here to view the plugin project on GitHub
Its documentation shows the following installation command:
bin/plugin install logstash-output-documentdb
# or
bin/logstash-plugin install logstash-output-documentdb (Newer versions of Logstash)

Install the plugin:

[root@PCNCMCNSA0036 logstash-7.8.0]# bin/logstash-plugin install logstash-output-documentdb
Validating logstash-output-documentdb
Installing logstash-output-documentdb
Installation successful
[root@PCNCMCNSA0036 logstash-7.8.0]#

Check whether the plugin was installed successfully:

[root@PCNCMCNSA0036 logstash-7.8.0]# bin/logstash-plugin list

Note: this takes a while; then check that the logstash-output-documentdb plugin we just installed appears in the list.
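Since the full list is long, piping it through grep makes the check quicker; if the install succeeded, this should print the plugin name:

bin/logstash-plugin list | grep documentdb
logstash-output-documentdb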
Plugins are installed under logstash-7.8.0/vendor/bundle/jruby/2.5.0/gems.
Plugin tool usage:

Usage:
    bin/logstash-plugin [OPTIONS] SUBCOMMAND [ARG] ...

Parameters:
    SUBCOMMAND    subcommand
    [ARG] ...     subcommand arguments

Subcommands:
    install       Install a plugin
    uninstall     Uninstall a plugin
    update        Update a plugin
    list          List all installed plugins

Options:
    -h, --help    print help

Next, configure the conf file. Below is the description from the official documentation:
input {
  stdin {}
}
output {
  documentdb {
    docdb_endpoint => "https://<YOUR ACCOUNT>.documents.azure.com:443/"
    docdb_account_key => "<ACCOUNT KEY>"
    docdb_database => "<DATABASE NAME>"
    docdb_collection => "<COLLECTION NAME>"
    auto_create_database => true|false
    auto_create_collection => true|false
    partitioned_collection => true|false
    partition_key => "<PARTITIONED KEY NAME>"
    offer_throughput => <THROUGHPUT NUM>
  }
}

docdb_endpoint (required) - Azure DocumentDB account endpoint URI
docdb_account_key (required) - Azure DocumentDB account key (primary key). You must not use a read-only key.
docdb_database (required) - DocumentDB database name
docdb_collection (required) - DocumentDB collection name
auto_create_database (optional) - default: true. By default, the DocumentDB database named by docdb_database is created automatically if it does not exist.
auto_create_collection (optional) - default: true. By default, the DocumentDB collection named by docdb_collection is created automatically if it does not exist.
partitioned_collection (optional) - default: false. Set to true to create and/or store records in a partitioned collection; set to false for a single-partition collection.
partition_key (optional) - default: nil. A partition key must be specified for a partitioned collection (i.e., when partitioned_collection is set to true).
offer_throughput (optional) - default: 10100. Collection throughput, expressed in units of 100 request units per second. This only takes effect when a partitioned collection is newly created (i.e., both auto_create_collection and partitioned_collection are set to true).
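Putting the partitioning options together, a sketch of a partitioned-collection output might look like this (the endpoint, key, and names are placeholders, and the partition key field "host" is an assumption about your events):

output {
  documentdb {
    docdb_endpoint => "https://<YOUR ACCOUNT>.documents.azure.com:443/"
    docdb_account_key => "<ACCOUNT KEY>"
    docdb_database => "<DATABASE NAME>"
    docdb_collection => "<COLLECTION NAME>"
    auto_create_database => true
    auto_create_collection => true
    partitioned_collection => true
    partition_key => "host"        # assumed partition key field
    offer_throughput => 10100      # plugin default; applied only when the collection is newly created
  }
}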
Just as we did with test.conf, create a configuration file named cosmos.conf:
input {
  stdin {}
}
output {
  documentdb {
    docdb_endpoint => "https://<configure this per your Azure portal>.documents.azure.cn:443/"
    docdb_account_key => "<configure this with your own MASTER KEY>"
    docdb_database => "saltMicroLog"
    docdb_collection => "appLogs"
    auto_create_database => true
    auto_create_collection => true
  }
  # for debug
  stdout { codec => rubydebug }
}

Test that the logstash-output-documentdb plugin ships logs to Cosmos DB:

[root@PCNCMCNSA0036 logstash-7.8.0]# ./bin/logstash -f config/cosmos.conf
Sending Logstash logs to /opt/logstash-7.8.0/logs which is now configured via log4j2.properties
[2020-07-02T08:33:29,929][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-07-02T08:33:30,295][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.8.0", "jruby.version"=>"jruby 9.2.11.1 (2.5.7) 2020-03-25 b1f55b1a40 Java HotSpot(TM) 64-Bit Server VM 25.251-b08 on 1.8.0_251-b08 +indy +jit [linux-x86_64]"}
[2020-07-02T08:33:35,579][INFO ][org.reflections.Reflections] Reflections took 126 ms to scan 1 urls, producing 21 keys and 41 values
[2020-07-02T08:33:40,636][INFO ][logstash.outputs.documentdb] Using version 0.1.x output plugin 'documentdb'. This plugin isn't well supported by the community and likely has no maintainer.
[2020-07-02T08:33:51,567][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/opt/logstash-7.8.0/config/cosmos.conf"], :thread=>"#<Thread:0x5ec6b5b3 run>"}
[2020-07-02T08:33:54,346][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[2020-07-02T08:33:54,543][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-07-02T08:33:55,293][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
hello comsmosDB
/opt/logstash-7.8.0/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
          "host" => "PCNCMCNSA0036",
       "message" => "hello comsmosDB",
      "@version" => "1",
    "@timestamp" => 2020-07-02T08:36:27.260Z
}

Go to the Azure portal to check the result of the log collection.
Related posts: configuring the grok filter; configuring the mutate filter. A brief sketch of both follows.
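To give a flavor of those filters, here is a minimal sketch that parses a hypothetical log line such as "2020-07-02 08:36:27 INFO order-service Order created" and then normalizes the parsed fields (the pattern and field names are assumptions for illustration, not part of this setup):

filter {
  # grok: split the raw message into structured fields
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:log_time} %{LOGLEVEL:level} %{NOTSPACE:service} %{GREEDYDATA:msg}" }
  }
  # mutate: normalize and clean up the parsed fields
  mutate {
    uppercase    => [ "level" ]
    rename       => { "msg" => "short_message" }
    remove_field => [ "message" ]
  }
}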
If the target machine has no internet access, you can prepare an offline plugin pack on a machine that does:

[root@tempcentosvmsa001 bin]# ./logstash-plugin prepare-offline-pack logstash-output-documentdb
Offline package created at: /usr/share/logstash/logstash-offline-plugins-7.8.0.zip
You can install it with this command `bin/logstash-plugin install file:///usr/share/logstash/logstash-offline-plugins-7.8.0.zip`
[root@tempcentosvmsa001 bin]# cd ..
[root@tempcentosvmsa001 logstash]# ls
bin           Gemfile       logstash-core                       NOTICE.TXT
config        Gemfile.lock  logstash-core-plugin-api            tools
CONTRIBUTORS  lib           logstash-offline-plugins-7.8.0.zip  vendor
data          LICENSE.txt   modules                             x-pack
[root@tempcentosvmsa001 logstash]#
Then install the plugin from the offline pack:

./bin/logstash-plugin install logstash-offline-plugins-*.zip
Further reading: Logstash plugins for Microsoft Azure services