Upgrading a stand-alone Graylog deployment to a cluster

Overview

When resources are tight in the early stage, Graylog is often deployed stand-alone and upgraded to a cluster later, once more resources are available. Most of the issues during the upgrade concern converting the stand-alone Elasticsearch node into a cluster; MongoDB and Graylog have comparatively few upgrade issues.

Server description

The following steps assume the servers below; in a production environment, substitute the corresponding servers.

IP            Function                          Remarks
10.0.107.55   elasticsearch, mongodb, graylog   Original node; holds historical data, needs backup
10.0.204.70   elasticsearch, mongodb, graylog   Added node 1; newly added machine
10.0.169.95   elasticsearch, mongodb, graylog   Added node 2; newly added machine

Upgrading stand-alone MongoDB to a replica set

Data backup

Backup content               File/folder location          Need backup   Operation
mongoDB data                 /var/lib/mongo                yes           cp -r /var/lib/mongo /var/lib/mongo.bak
mongoDB configuration file   /etc/mongod.conf              yes           cp /etc/mongod.conf /etc/mongod.conf.bak
log file                     /var/log/mongodb/mongod.log
pid file                     /var/run/mongodb/mongod.pid

Notice:
Check the available disk space before backing up, and note the ownership (user/group) of the files.
File permissions must also be preserved when backing up and restoring.

Modify configuration

Generate access.keyfile

On the master node (any node will do), use OpenSSL to generate the key file:

#!/bin/bash
##########################################################
# Script description: mongo cluster configuration
##########################################################

# generate access.keyfile
openssl rand -base64 756 > /var/lib/mongo/access.keyfile

# modify group
chown mongod:mongod /var/lib/mongo/access.keyfile

# Set file permissions
chmod 600 /var/lib/mongo/access.keyfile

On the other two nodes, copy the file over from the master and set its permissions and ownership:

# file copy
scp [email protected]:/var/lib/mongo/access.keyfile /var/lib/mongo/

# modify group
chown mongod:mongod /var/lib/mongo/access.keyfile

# Set file permissions
chmod 600 /var/lib/mongo/access.keyfile

Modify the /etc/mongod.conf configuration file

Mainly modify the net, replication, and security sections; leave the other settings unchanged.

net:
  port: 27017
  # Local IPs (multiple interface addresses can be bound, separated by commas;
  # 127.0.0.1 is used for local login, 10.0.107.55 for the cluster)
  bindIp: 127.0.0.1,10.0.107.55
replication:
  # replica set name
  replSetName: graylog-rs
security:
  # authorization key file
  keyFile: /var/lib/mongo/access.keyfile
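
After the configuration is updated on all three nodes, mongod must be restarted for the replication and security settings to take effect. A minimal sketch, assuming a systemd-managed install (service name mongod, as in the stock packages):

# restart mongod on every node after editing /etc/mongod.conf
systemctl restart mongod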


Configure cluster

Log in to MongoDB locally and initialize the replica set (on the master node only).
To add a new node, MongoDB must be installed and started on it first; after the node is added, data is synchronized to it automatically.

# Enter the MongoDB console
mongo

# create admin library
use admin

# Initialize the replica set; returns { "ok" : 1 }
rs.initiate({
   _id : "graylog-rs",
   members: [
      { _id: 0, host: "10.0.107.55:27017" },
      { _id: 1, host: "10.0.204.70:27017" }
   ]
})

# Set slave nodes to be readable
# rs.slaveOk(); has been deprecated
rs.secondaryOk();
# View status; after successful initialization the prompt shows graylog-rs:SECONDARY> by default
graylog-rs:SECONDARY> rs.status()
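
The initiation above lists only two members. Assuming the third machine (10.0.169.95) should join the set as well, it can be added from the primary with the standard rs.add() helper once mongod is running there:

# run on the PRIMARY once mongod is up on the new machine
rs.add("10.0.169.95:27017")
# the new member should appear and eventually reach SECONDARY state
rs.status()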

Create the databases and set passwords

Configured on the master node.

# enter the console
mongo

# Create the admin user and set its password
use admin
db.createUser({user: "admin", pwd: "admin123graylog", roles: ["root"]})
# authenticate
db.auth("admin","admin123graylog")

# To change the password later, authenticate first, then:
db.changeUserPassword('admin','admin123graylog')

# Create the graylog database and its user, and set the password
use graylog
db.createUser({
   user: "graylog",
   pwd: "admin123graylog",
   roles: [{
      role: "dbOwner",
      db: "graylog"
   }, {
      role: "readWrite",
      db: "graylog"
   }]
})

Verify

Log in on a slave node and check whether the data is present.

# enter the console
mongo


use graylog
# authorization
db.auth("graylog","admin123graylog")

# view data
db.index_sets.find();

graylog-rs:SECONDARY> db.index_sets.find();
{ "_id" : ObjectId("6369f7c9e77f9c348b8d3693"), "title" : "Default index set", "description" : "The Graylog default index set", "regular" : true, "index_prefix" : "graylog", "shards" : 4, "replicas" : 0, "rotation_strategy_class" : "org.graylog2.indexer.rotation.strategies.MessageCountRotationStrategy", "rotation_strategy" : { "type" : "org.graylog2.indexer.rotation.strategies.MessageCountRotationStrategyConfig", "max_docs_per_index" : 20000000 }, "retention_strategy_class" : "org.graylog2.indexer.retention.strategies.DeletionRetentionStrategy", "retention_strategy" : { "type" : "org.graylog2.indexer.retention.strategies.DeletionRetentionStrategyConfig", "max_number_of_indices" : 20 }, "creation_date" : ISODate("2022-11-08T06:31:37.982Z"), "index_analyzer" : "standard", "index_template_name" : "graylog-internal", "index_template_type" : null, "index_optimization_max_num_segments" : 1, "index_optimization_disabled" : false, "field_type_refresh_interval" : NumberLong(5000), "writable" : true } …

Upgrading stand-alone Elasticsearch to a cluster

  • Use the original node as the master node, configure the cluster, delete (or rename) the files beginning with manifest in the /esdata/nodes/0/_state directory, then start the master node (see the sketch after this list)
  • Start the other nodes and wait for synchronization
  • Query data through the other nodes
  • The subsequent steps are the same as for a regular cluster deployment
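
A sketch of the manifest step from the first bullet, assuming the data path /esdata used there and the usual manifest-N.st naming of the cluster-state files:

# on the original node, with elasticsearch stopped
cd /esdata/nodes/0/_state
# rename rather than delete, so the step can be rolled back
for f in manifest-*; do mv "$f" "$f.bak"; done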

Data backup

Backup content          File/folder location   Need backup   Operation
es data and logs        /data/elasticsearch    yes           cp -r /data/elasticsearch /data/elasticsearch.bak
es configuration file   /etc/elasticsearch     yes           cp -r /etc/elasticsearch /etc/elasticsearch.bak

Notice:
Check the available disk space before backing up, and note the ownership (user/group) of the files.
File permissions must also be preserved when backing up and restoring.

Modify configuration

Generate elastic-certificates.p12 file

On the master node (any node will do), use /usr/share/elasticsearch/bin/elasticsearch-certutil to generate a certificate file. A password is required; enter admin123graylog.
Mind the permissions of the certificate file: the elasticsearch user needs read and write access to it.

# generate certificate
/usr/share/elasticsearch/bin/elasticsearch-certutil ca --out /etc/elasticsearch/elastic-certificates.p12
# Give read and write permissions (also check ownership so the elasticsearch user can read the file)
chmod 660 /etc/elasticsearch/elastic-certificates.p12

Copy the generated certificate file to the configuration directory of the other nodes, and set file permissions and ownership.
Both elastic-certificates.p12 and elasticsearch.keystore need to be copied.

# copy files, preserving permissions (-p)
scp -rp /etc/elasticsearch/elastic-certificates.p12 [email protected]:/etc/elasticsearch/
scp -rp /etc/elasticsearch/elasticsearch.keystore [email protected]:/etc/elasticsearch/
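
The second added node needs the same files; a small loop covering both machines (hosts from the server table above):

# copy certificate and keystore to both new nodes, preserving permissions
for host in 10.0.204.70 10.0.169.95; do
  scp -rp /etc/elasticsearch/elastic-certificates.p12 root@$host:/etc/elasticsearch/
  scp -rp /etc/elasticsearch/elasticsearch.keystore root@$host:/etc/elasticsearch/
done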

Modify the /etc/elasticsearch/elasticsearch.yml configuration file

Mainly modify cluster.name, node.name, network.host, transport.port, discovery.seed_hosts, and cluster.initial_master_nodes; leave the other settings unchanged.
Every node needs this configuration. In this example the cluster name is graylog (it must match the cluster name of the original Elasticsearch).

# Node role settings are deprecated in versions after 7.x; if present, warning logs are printed continuously
# node.master: false
# node.data: true

# cluster name
cluster.name: graylog
# Node name (multiple nodes are not repeated)
node.name: graylog01
# Data and log file path (Note: The directory needs to exist in advance, and the script installation will automatically create it)
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
# local ip
network.host: 10.0.107.55
# api port (used by the interface)
http.port: 9200
# Cluster port (used for data synchronization scheduling)
transport.port: 9300
# Cluster discovery (each node configuration is the same)
discovery.seed_hosts: ["10.0.107.55:9300", "10.0.204.70:9300","10.0.169.95:9300"]
# Initial master-eligible nodes (same on every node). On the very first startup of the
# cluster, cluster.initial_master_nodes must list one or more nodes that participate in
# electing the initial master.
cluster.initial_master_nodes: ["graylog01"]

The other two nodes need the same configuration. The effective settings of each node can be viewed as follows.

# View configuration data
cat /etc/elasticsearch/elasticsearch.yml | grep -v "^#" | grep -v "^$"

cluster.name: graylog
node.name: graylog02
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
network.host: 10.0.204.70
http.port: 9200
transport.port: 9300
discovery.seed_hosts: ["10.0.107.55:9300", "10.0.204.70:9300","10.0.169.95:9300"]
cluster.initial_master_nodes: ["graylog01"]
# View configuration data
cat /etc/elasticsearch/elasticsearch.yml | grep -v "^#" | grep -v "^$"

cluster.name: graylog
node.name: graylog01
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
network.host: 0.0.0.0
http.port: 9200
transport.port: 9300
discovery.seed_hosts: ["10.0.107.55:9300", "10.0.204.70:9300","10.0.169.95:9300"]
cluster.initial_master_nodes: ["graylog01"]
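
Once every node is configured, restart the service and confirm that all nodes have joined; a quick check, assuming a systemd-managed install and that security is not yet enabled (it is added in the next step):

# restart to pick up the new configuration (on each node)
systemctl restart elasticsearch
# all three nodes should be listed; the elected master is marked with *
curl http://10.0.107.55:9200/_cat/nodes?v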

Password configuration

xpack password configuration

Add the following configuration to the /etc/elasticsearch/elasticsearch.yml file (on every node):

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/elastic-certificates.p12
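
After adding these lines, restart Elasticsearch on every node; elasticsearch-setup-passwords in the next step talks to a running cluster that already has security enabled. Assuming a systemd-managed install:

# apply the xpack security settings (on each node)
systemctl restart elasticsearch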

Reset passwords (master node operation)

Run this on the master node and enter the password at each prompt, for example admin123graylog.

# Change password
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive

# The output looks like the following; enter the password at each prompt
future versions of Elasticsearch will require Java 11; your Java version from [/opt/soft/jdk/jdk-8u351-linux-x64/jre] does not meet this requirement
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y


Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana_system]:
Reenter password for [kibana_system]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]


Verify

Username: elastic
Password: admin123graylog
Query the other nodes to confirm the data is present.
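
For example, querying one of the new nodes with the standard health and index-listing APIs, using the elastic credentials set above:

# cluster health should report 3 nodes
curl -u elastic:admin123graylog 'http://10.0.204.70:9200/_cluster/health?pretty'
# the graylog indices should be visible from the new node
curl -u elastic:admin123graylog 'http://10.0.204.70:9200/_cat/indices?v'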

Upgrading stand-alone Graylog to a cluster

Note:
Every node in the Graylog cluster needs a clock synchronization service (e.g. NTP/chrony) installed.

Configuration file

Modify the configuration file on each node and start the service.
Note that the configuration items is_master, http_bind_address, and http_publish_uri differ between the master node and the slave nodes.

# true on the master node, false on slave nodes
is_master = true
# graylog bind address (on each node, the node's own IP)
http_bind_address = 10.0.107.55:9000
# graylog API endpoint (on each node, the node's own IP); defaults to http://$http_bind_address/ and can be left unset
http_publish_uri = http://10.0.107.55:9000/
# enable graylog's cross-origin (CORS) support
http_enable_cors = true
# es cluster addresses (same on all nodes; @ and : characters in the password must be escaped)
elasticsearch_hosts = http://elastic:[email protected]:9200,http://elastic:[email protected]:9200,http://elastic:[email protected]:9200
# mongo cluster address (same on all nodes; listing a single node's ip and port is enough; @ and : characters in the password must be escaped, see https://www.mongodb.com/docs/manual/reference/connection-string/)
mongodb_uri = mongodb://graylog:[email protected]:27017/graylog?replicaSet=graylog-rs
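
On escaping: had the password been admin@123 (a hypothetical example), the @ would have to be written percent-encoded in both URIs, e.g. http://elastic:admin%40123@10.0.107.55:9200 (@ is %40, : is %3A). The password used here contains neither character, so nothing needs escaping.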

Load balancing configuration

Graylog uses nginx to load-balance both the service web/API interface and the input (log push) ports.

Enable the nginx stream module

nginx must be compiled with the stream module: add --with-stream to the ./configure arguments when building.
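
Whether an existing nginx binary was built with the module can be checked from its configure arguments:

# look for --with-stream in the build flags
nginx -V 2>&1 | grep -o -- '--with-stream'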

Add nginx configuration

nginx.conf adds stream-related content

# Note: TCP/UDP forwarding cannot be placed inside the http block; the stream block sits at the same level as the http block
stream {
  upstream server_input1{
    server 10.0.107.55:12201;
    server 10.0.204.70:12201;
    server 10.0.169.95:12201;
  }

  server {
    # push TCP port
    #listen 22201;
    # push UDP port
    listen 22201 udp;
    proxy_pass server_input1;
  }
}

Configure forwarding for the graylog web service in a separate configuration file:

http {
  default_type application/octet-stream;

  log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    '$status $body_bytes_sent "$http_referer" '
    '"$http_user_agent" "$http_x_forwarded_for"';

  #access_log /var/log/nginx/access.log main;

  sendfile on;
  #tcp_nopush on;

  keepalive_timeout 65;

  #gzip on;

  upstream graylog_servers{
    server 10.0.107.55:9000;
    server 10.0.169.95:9000;
    server 10.0.204.70:9000;
  }

  server {
    listen 19000;
    server_name 10.0.107.55;
    location / {
      root html;
      index index.html index.htm;
      proxy_set_header Host $http_host;
      proxy_set_header X-Forwarded-Host $host;
      proxy_set_header X-Forwarded-Server $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      # must be the URL through which clients reach this nginx listener
      proxy_set_header X-Graylog-Server-URL http://$server_name:19000/;
      proxy_pass http://graylog_servers;
    }
  }
}

Stop nginx and start it again with the configuration file specified (note: it must be fully stopped and restarted, otherwise the newly added stream listening port will not take effect):

nginx -s stop
nginx -c /usr/local/nginx/conf/nginx.conf
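
Confirm that both new listeners are up, for example with ss from iproute2:

# 19000/tcp is the web proxy, 22201/udp the input forwarder
ss -lntup | grep -E ':19000|:22201'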


Verification

  1. TCP ports can be verified with the telnet command
  2. UDP ports need the netcat tool:
nc -vuz 10.0.204.70 22201
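
For instance, assuming nginx runs on 10.0.107.55 and the input behind port 12201 is a GELF UDP input (the common default):

# TCP check of the proxied web port
telnet 10.0.107.55 19000
# end-to-end UDP test: a GELF UDP input accepts plain JSON datagrams
echo '{"version":"1.1","host":"lb-test","short_message":"hello via nginx"}' \
  | nc -u -w1 10.0.107.55 22201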


  3. Graylog failover verification: stop the service on the master node and check that logs can still be pushed normally
