Logging Made Easy Through Graylog Part 1

Logging is an important piece of an organization’s security posture. Logging without organization, searchability, or reporting leads to data being missed. This is the start of a multi-part series that VDA Labs is writing on Graylog, covering a variety of topics including the following items:

  1. Installation, securing, and optimizing the setup (part 1)
  2. Installation, securing, and optimizing the setup (part 2)
  3. Domain Controller/DHCP log collection and alerts
  4. File/print server log collection and alerts
  5. Exchange server log collection
  6. IIS log collection
  7. Firewall log collection
  8. Setting up Threat Intelligence
  9. SQL Server

Knowledge is power


Although many pieces of software provide logging, that logging can be modified or deleted from a system before it can be used for troubleshooting or post-incident analysis. Having an external source of logging from all systems and services can help decrease time spent troubleshooting or correlating logs from separate systems. External logging can also provide insight into what an attacker may have done on systems even if they deleted or modified the local logs.

Graylog answers this problem by providing a way to export logs into a separate system. That system can then be used for alerting, dashboards, reporting, and incident response.

Sizing Your System

For this blog we will be using the following system specs:

  • 6 cores
  • 24GB of memory
  • 500GB of SSD Storage

Graylog is resource intensive, but with the above specs administrators can expect to process a few thousand messages per second. If administrators need more performance, they should start looking at a multi-node configuration; that process is outside the scope of this blog.
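To confirm a host meets these specs before installing anything, a few standard commands will do (nothing here is Graylog-specific):

# Check core count, memory, and root filesystem space against the sizing above
nproc
free -h
df -h /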

DNS

I CANNOT STRESS THIS ENOUGH. YOU MUST HAVE A VALID FQDN WITH A DNS RECORD. 

If a valid DNS record does not exist and you use an IP address in your configuration files within Graylog, you will have failures due to missing or invalid Subject Alternative Names (SAN) on your certificates.
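Before going any further, verify that your FQDN resolves to the server’s IP. The hostname below (graylog.example.com) is a placeholder for your own record:

# Both lookups should return the IP address of your Graylog server
nslookup graylog.example.com
dig +short graylog.example.com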

Installing Prerequisites

Let’s begin by installing and configuring the prerequisites for Graylog functionality. This installation will be done in an Ubuntu 18.04 LTS environment. The following packages need to be installed before continuing:

sudo apt-get install apt-transport-https openjdk-8-jre-headless uuid-runtime pwgen

MongoDB

Next, let’s install MongoDB.

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4
echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
sudo apt-get update
sudo apt-get install -y mongodb-org

After the installation, make sure to configure MongoDB to start at boot:

sudo systemctl daemon-reload
sudo systemctl enable mongod.service
sudo systemctl restart mongod.service
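As a quick optional check, confirm MongoDB came up cleanly by checking the service and pinging the database:

# Service should report active (running)
sudo systemctl status mongod.service
# Should return { "ok" : 1 }
mongo --eval 'db.runCommand({ ping: 1 })'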

Elasticsearch

Graylog 3.x requires Elasticsearch 6.x. Let’s install and configure Elasticsearch to work with the Graylog install, starting by running the following commands:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
sudo apt-get update && sudo apt-get install elasticsearch

Configure Elasticsearch for Graylog

Elasticsearch is now installed. It is time to name the cluster, which can be done by modifying the configuration file located at /etc/elasticsearch/elasticsearch.yml.

We want to uncomment and modify the following line to match the configuration:

cluster.name: graylog

With Elasticsearch installed, tell the system to start the Elasticsearch service during the boot process of the OS. This can be accomplished with the following commands:

sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl restart elasticsearch.service
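At this point Elasticsearch is still listening on the default localhost:9200 without TLS (we set network.host later), so a quick curl confirms the service is up and the cluster name took effect:

# Expect "cluster_name" : "graylog" in the JSON response
curl -XGET 'http://localhost:9200/?pretty'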

Securing Our Back End


Before installing Graylog, let’s ensure our back end is secure by adding authentication and encryption to the MongoDB and Elasticsearch nodes. If these steps are skipped, an attacker could:

  • Intercept data to and from MongoDB nodes
  • Intercept data to and from Elasticsearch nodes
  • Join rogue Elasticsearch nodes to a multi-node setup

It is especially important to complete these steps if the Graylog instance will be running in the cloud or on internet facing systems.

Securing MongoDB

Adding Authentication To MongoDB


Before we add TLS, let’s add authentication to the MongoDB Graylog database.

First, connect to the DB instance:

mongo --port 27017

Next, let’s connect to the Graylog DB and create an admin user with a strong password (the password below is a placeholder; use your own).

use graylog;
db.createUser(
  {
    user: "mongo_admin",
    pwd: "password123",
    roles: [ { role: "root", db: "admin" } ]
  }
)
exit

With the user created, let’s enable authentication for MongoDB. This can be done by modifying the following lines in /etc/mongod.conf:

security:
  authorization: enabled
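After changing /etc/mongod.conf, restart MongoDB and confirm that authentication is enforced; the credentials below match the mongo_admin user created above:

sudo systemctl restart mongod.service
# This should succeed and drop you into an authenticated shell
mongo --port 27017 -u mongo_admin -p --authenticationDatabase graylog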

Adding TLS

Before adding TLS, let’s generate a certificate authority (CA) certificate and a server certificate. We will be using self-signed certificates for this project. For production systems, an engineer would want to get valid certs from either a trusted internal CA or a trusted external CA.

Generating our CA Certificate

Before we can do anything, let’s generate a CA certificate that we can use to sign our server certificates. This can be done with the following command, run from /opt/opensslkeys:

openssl req -out graylogca.pem -new -x509

After doing this we should have two files, graylogca.pem and privkey.pem. Make sure both of these files are kept secure, as they will be used for signing all other certificates on our system. See the screenshot below for what this process should look like when successful.

CA creation output
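Note that the command above leans on OpenSSL’s defaults, which is where privkey.pem comes from. If you prefer an equivalent, explicit form (the 3650-day lifetime here is an assumption; pick your own):

# Same result, with the key file and certificate lifetime spelled out
openssl req -new -x509 -keyout privkey.pem -out graylogca.pem -days 3650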

Generate our Server Certificate

Now that we have our CA certificate created, let’s sign our server certs. To do this we need to run the following commands:

sudo openssl genrsa -out MongoDB.key 2048
sudo openssl req -key MongoDB.key -new -out mongodb.req
sudo openssl x509 -req -in mongodb.req -CA graylogca.pem -CAcreateserial -out mongodb.pem -CAkey privkey.pem

If everything is successful, the engineer will see output similar to the following screenshots, showing the MongoDB key being generated, the CSR being generated, and the signed server certificate being generated.

Combine the private key and the MongoDB certificate, key first, into a single file:

cat MongoDB.key mongodb.pem > mongoDB2.pem

The final product should have the following format.

-----BEGIN RSA PRIVATE KEY-----
Key
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
Certificate
-----END CERTIFICATE-----
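As a quick sanity check, OpenSSL should be able to read both halves back out of the combined file:

# Verify the private key parses
openssl rsa -in mongoDB2.pem -noout -check
# Verify the certificate parses and show its subject and validity dates
openssl x509 -in mongoDB2.pem -noout -subject -dates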

Apply TLS Certificate to MongoDB

Now that we have generated our CA and server certificates, we need to modify our MongoDB config located at /etc/mongod.conf. The following lines need to be modified or created.

net:
  ssl:
    mode: requireSSL
    PEMKeyFile: /opt/opensslkeys/mongoDB2.pem
    CAFile: /opt/opensslkeys/graylogca.pem
    allowConnectionsWithoutCertificates: true

With this completed, we need to restart MongoDB and check the log file. If everything is working, you should see that MongoDB has started and is running.

sudo chmod 444 /opt/opensslkeys/*
sudo service mongod restart
sudo cat /var/log/mongodb/mongod.log
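Optionally, confirm TLS is actually enforced by connecting with the mongo shell over SSL; the FQDN must match what you put in the certificate request:

# Should connect over TLS, using our CA to validate the server certificate
mongo --ssl --sslCAFile /opt/opensslkeys/graylogca.pem --host <Server FQDN> --port 27017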

Securing Elasticsearch

Configuring TLS

For this project, a self-signed certificate will be used for the Elasticsearch node. This certificate can be generated using the elasticsearch-certutil utility included with Elasticsearch, located at /usr/share/elasticsearch/bin/elasticsearch-certutil. Do not put a password on the files you create.

sudo /usr/share/elasticsearch/bin/elasticsearch-certutil ca --ca-dn CN=dontsquatme.com
sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --ip <Server IP> --dns <Server FQDN>

If successful there will be two files:

elastic-certificates.p12
elastic-stack-ca.p12

Copy elastic-certificates.p12 to /etc/elasticsearch/certs:

sudo mkdir -p /etc/elasticsearch/certs
sudo cp /usr/share/elasticsearch/bin/elastic-certificates.p12 /etc/elasticsearch/certs
sudo chmod 444 /etc/elasticsearch/certs/*

Next, we want to modify elasticsearch.yml, located at /etc/elasticsearch/elasticsearch.yml. Add the following lines to ensure the new configuration is used:

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: none
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/certs/elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.verification_mode: none
xpack.security.http.ssl.keystore.path: /etc/elasticsearch/certs/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: /etc/elasticsearch/certs/elastic-certificates.p12
xpack.security.http.ssl.client_authentication: optional

We also want to modify the following line to point to the IP address of our server:

network.host: $ServerIPGoesHere

Restart Elasticsearch:

sudo service elasticsearch restart

Setting up Authentication

Setting up authentication for Elasticsearch is extremely easy. All we need to do is run the following command:

sudo /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive

Keep track of the passwords you set here as they will be needed later.
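With TLS and passwords in place, verify you can reach Elasticsearch over HTTPS as the built-in elastic user. The -k flag skips certificate verification because this is a self-signed cert:

# Expect a JSON banner including "cluster_name" : "graylog"
curl -k -u elastic https://<Server FQDN>:9200/?pretty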

Time to Install Graylog

Now that all the prerequisites are complete, it’s time to install Graylog. Run the following commands to start the process:

wget https://packages.graylog2.org/repo/packages/graylog-3.2-repository_latest.deb
sudo dpkg -i graylog-3.2-repository_latest.deb
sudo apt-get update && sudo apt-get install graylog-server
sudo systemctl enable graylog-server.service
sudo systemctl start graylog-server.service

Adding our Self-Signed Certificate to the Graylog JVM Trust Store

Without adding the newly generated self-signed certificate to our Graylog JVM trust store, Graylog will throw errors and fail to connect to Elasticsearch. To prevent this, run the following command (if JAVA_HOME is not set in your environment, use the explicit path shown below):

sudo cp -a "${JAVA_HOME}/jre/lib/security/cacerts" /etc/graylog/server/cacerts.jks

The easiest way to find the location of the original JVM Trust Store is to use the following command:

sudo find / -name "cacerts"

As of this writing, the OpenJDK 8 keystore is located at /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/security/cacerts, which means the following command will copy the data we need. We copy the keystore rather than modify the global trust store, so the copy is used only by Graylog:

sudo cp -a /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/security/cacerts /etc/graylog/server/cacerts.jks

Copy the previously created .p12 to the Graylog configuration folder and convert it to a format that Java can use:

sudo cp /etc/elasticsearch/certs/elastic-certificates.p12 /etc/graylog/server/elastic-certificates.p12
# Convert to a .pem
sudo openssl pkcs12 -in /etc/graylog/server/elastic-certificates.p12 -out /etc/graylog/server/elastic.pem
# Convert to x509 certificates (full paths match where the files are used below)
sudo openssl x509 -outform der -in /etc/graylog/server/elastic.pem -out /etc/graylog/server/elastic.crt
sudo openssl x509 -outform der -in /opt/opensslkeys/graylogca.pem -out /opt/opensslkeys/graylogca.crt

Import them into the new Graylog keystore:

sudo keytool -importcert -keystore /etc/graylog/server/cacerts.jks -storepass changeit -alias elastic-cluster -file /etc/graylog/server/elastic.crt
sudo keytool -importcert -keystore /etc/graylog/server/cacerts.jks -storepass changeit -alias graylogca -file /opt/opensslkeys/graylogca.crt
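You can list the keystore to confirm both aliases were added. Depending on your installation, you may also need to point the Graylog JVM at this trust store (for example via -Djavax.net.ssl.trustStore=/etc/graylog/server/cacerts.jks in GRAYLOG_SERVER_JAVA_OPTS in /etc/default/graylog-server); check the Graylog documentation for your version:

# Both elastic-cluster and graylogca should appear in the alias list
sudo keytool -list -keystore /etc/graylog/server/cacerts.jks -storepass changeit | grep -E 'elastic-cluster|graylogca'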

Configuring Graylog

Now that we have Elasticsearch, MongoDB, and Graylog installed, we need to configure Graylog. The Graylog configuration file, server.conf, is located at /etc/graylog/server/server.conf. Here are the important lines; a filled-in sample follows the list:

  • password_secret
    • This should be a 64-character key used to salt other passwords
  • root_password_sha2
    • This is the password for the default “admin” user. It should be set using the following command:
    • echo -n yourpassword | shasum -a 256
    • Copy the resulting hash into this line
  • http_bind_address
    • This is the address the web interface will bind to; it should be the private IP address of your server
  • elasticsearch_hosts
    • This should point to the FQDN of your server. Don’t forget to include the username and password you created
    • https://username:password@<Server FQDN>:9200
  • mongodb_uri
    • This should also point at your server’s FQDN; make sure to include the username and password you created
    • mongodb://username:password@<Server FQDN>:27017/graylog?ssl=true
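As a hedged illustration, with placeholder values for the FQDN, IP, and passwords, the relevant server.conf lines might look like this:

# Generate with: pwgen -N 1 -s 96 (pwgen was installed with the prerequisites)
password_secret = <96 character random string>
# Paste the output of: echo -n yourpassword | shasum -a 256
root_password_sha2 = <sha256 hash of your admin password>
http_bind_address = 10.0.0.5:9000
elasticsearch_hosts = https://elastic:yourpassword@graylog.example.com:9200
mongodb_uri = mongodb://mongo_admin:yourpassword@graylog.example.com:27017/graylog?ssl=true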

After this is completed, Graylog should be operational. Reboot your server one more time, then run the following commands to ensure the services have started properly:

service graylog-server status
service mongod status
service elasticsearch status
sudo cat /var/log/elasticsearch/graylog.log
sudo cat /var/log/mongodb/mongod.log
sudo cat /var/log/graylog-server/server.log

If everything was set up correctly, browse to http://<Server IP>:9000 and log in to Graylog for the first time with the admin username and password that was created earlier.


VDA’s next blog post will cover securing the Graylog web interface and adding the first server to collect and parse data from. If you have any questions about application, IoT, or system security, please visit VDA’s home page to view our services.