Notes on editing the yml configuration:
Each setting key must start at the beginning of the line, with no leading spaces.
There must be exactly one space after the ":".
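A minimal illustration of the two rules above, for the flat key: value style used in this file (cluster.name / my-es are just example values):

cluster.name: my-es        # correct: key starts in column 0, one space after the colon
  cluster.name: my-es      # wrong: leading spaces before the key
cluster.name:my-es         # wrong: no space after the colon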
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
# cluster.name must be identical on every machine
#
# Use a descriptive name for your cluster:
#
cluster.name: my-es
#
# ------------------------------------ Node ------------------------------------
# node.name is the node name and must be different on every machine
#
# Use a descriptive name for the node:
#
node.name: node-102
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
# Turn off the bootstrap self-checks
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
# Network: change this to the current machine's own address; keep the default port 9200
network.host: hadoop102
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
# Discovery: the hosts a newly started node contacts to join the cluster
discovery.seed_hosts: ["hadoop102","hadoop103","hadoop104","hadoop105","hadoop106"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
# New setting in version 7; more on this later
cluster.initial_master_nodes: ["node-102","node-103","node-104","node-105","node-106"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

This completes the configuration on one server.
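The Memory comments above recommend a heap of about half the machine's RAM. In Elasticsearch 7.x the heap is set in config/jvm.options, not in elasticsearch.yml. A minimal sketch, assuming a machine with roughly 8 GB of RAM; the 4g value is illustrative only, adjust it to your hardware:

# config/jvm.options -- set initial and maximum heap to the same value,
# roughly half of physical RAM (and no more than ~31 GB)
-Xms4g
-Xmx4g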
Next, distribute the files to the other four servers:
[deploy@hadoop102 module]$ xsync elasticsearch-7.7.1

At the same time, edit the configuration file on hadoop103, hadoop104, hadoop105 and hadoop106, changing node.name: node-102 and network.host: hadoop102 to match each host (see the sketch below).
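A hypothetical sketch of those per-host edits, assuming the install lives under /opt/module/elasticsearch-7.7.1 (an assumption, not stated above) and that passwordless SSH is set up for the deploy user; you can of course also edit each file by hand:

# Run from hadoop102: fix node.name and network.host on each remaining node
for host in hadoop103 hadoop104 hadoop105 hadoop106; do
  num=${host#hadoop}    # strip the "hadoop" prefix: 103, 104, 105, 106
  ssh "$host" "sed -i 's/^node.name: .*/node.name: node-$num/; s/^network.host: .*/network.host: $host/' /opt/module/elasticsearch-7.7.1/config/elasticsearch.yml"
done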
Why do we need to modify the Linux configuration?
By default, Elasticsearch runs in single-machine access mode: it can only be reached from the machine it runs on.
Later, however, we will allow application servers to access Elasticsearch over the network. Once network access is enabled, Elasticsearch no longer tolerates the low default operating-system limits: it reports errors and may even refuse to start.
So at this point we raise those operating-system limits so the servers can support more concurrency.
Problem 1: max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]
Cause: the maximum number of files the system allows the Elasticsearch process to open must be raised to 65536.
Fix: vi /etc/security/limits.conf and append:
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 65536
Note: do not omit the "*".
Distribute the file: xsync /etc/security/limits.conf

Problem 2: max number of threads [1024] for user [judy2] likely too low, increase to at least [2048]
Cause: the maximum number of processes allowed for the user is too low; raise it to 4096 (which also satisfies the required minimum of 2048).
Fix: vi /etc/security/limits.d/90-nproc.conf and change:
* soft nproc 1024
to
* soft nproc 4096
Distribute the file: xsync /etc/security/limits.d/90-nproc.conf
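These limits only apply to new login sessions. A quick way to verify them after logging in again as the deploy user (the expected values follow from the settings above):

ulimit -n    # max open files     -> expect 65536
ulimit -u    # max user processes -> expect 4096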
Problem 3: max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
Cause: the number of virtual memory areas (memory maps) a single process is allowed to own is too low.
Fix: append the following line to the end of /etc/sysctl.conf to make the change permanent:
vm.max_map_count=262144
Distribute the file: xsync /etc/sysctl.conf
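Editing /etc/sysctl.conf alone does not change the running kernel; the value takes effect at the next boot, or immediately after reloading the file. A quick check to run on each node (as root or with sudo):

sysctl -p                  # reload /etc/sysctl.conf
sysctl vm.max_map_count    # expect: vm.max_map_count = 262144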
Upload to /home/deploy/bin and grant 777 permissions (chmod 777).
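Starting the cluster itself is not covered in this section, but once Elasticsearch has been started on every node, a quick way to confirm that all five nodes have formed one cluster, using the host and port configured above:

curl http://hadoop102:9200/_cat/nodes?v            # should list node-102 through node-106
curl http://hadoop102:9200/_cluster/health?pretty  # overall cluster status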
Kibana configuration: the two settings to change in kibana.yml are
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://hadoop102:9200"]
The full kibana.yml after these changes:
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "hadoop102"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://hadoop102:9200"]

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"

Open http://hadoop102:5601/ in a browser; if everything is configured correctly you will see the Kibana start page, confirming a successful login.
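As an alternative to the browser check, the same endpoints can be probed from the command line, using the hosts and ports configured above:

curl http://hadoop102:9200                 # Elasticsearch should answer with its cluster and version info
curl http://hadoop102:5601/api/status      # Kibana status API; returns status information once the server is ready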