No More Secrets: Logging Made Easy Through Graylog Part 2

Logging is an important but often overlooked part of an organization’s security posture. Logging without organization, searchability, or reporting leads to data being missed. This post is a continuation of a longer series that VDA Labs is writing on Graylog. It is part 2 of a multi-part series covering a variety of topics, including the following items:

  1. Installation, securing, and optimizing the setup part 1
  2. Installation, securing, and optimizing the setup part 2
  3. Domain Controller/DHCP log collection and alerts
  4. File/print server log collection and alerts
  5. Exchange server log collection
  6. IIS log collection
  7. Firewall log collection
  8. Setting up Threat Intelligence
  9. SQL Server

This week focuses on securing the Graylog web interface and some basic optimization for Graylog.

Securing our Graylog Web Interface


In our last blog, VDA covered setting up and securing MongoDB and Elasticsearch for use with Graylog. We also installed Graylog and connected it to our MongoDB and Elasticsearch instances securely using TLS. Now we need to focus on securing the web and API interfaces of our Graylog instance. To do this, we first need to generate our new Graylog certificate and import the corresponding key.

To start, we will need to generate a new certificate request. This can be done by creating a file named openssl-graylog.cnf (kept alongside our other key material in /opt/opensslkeys/). Inside this file, we want the following:

[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no

# Details about the subject of the certificate (the Graylog server)
[req_distinguished_name]
C = US
ST = Some-State
L = Some-City
O = My Company
OU = My Division
CN = graylog.example.com

[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

# IP addresses and DNS names the certificate should include
# Use IP.### for IP addresses and DNS.### for DNS names,
# with "###" being a consecutive number.
[alt_names]
IP.1 = 203.0.113.42
DNS.1 = graylog.example.com

With our request created, we can start the process of generating our new certificates and importing our files into the keystore.

sudo openssl genrsa -out /opt/opensslkeys/GraylogWeb.key 2048
sudo openssl req -key /opt/opensslkeys/GraylogWeb.key -new -out /opt/opensslkeys/GraylogWeb.req -config /opt/opensslkeys/openssl-graylog.cnf -sha256
sudo openssl x509 -req -in /opt/opensslkeys/GraylogWeb.req -CA /opt/opensslkeys/graylogca.pem -CAcreateserial -out /opt/opensslkeys/GraylogWeb.pem -CAkey /opt/opensslkeys/privkey.pem -extfile /opt/opensslkeys/openssl-graylog.cnf -extensions v3_req -sha256
sudo cp /opt/opensslkeys/GraylogWeb.* /etc/graylog/server/
sudo openssl x509 -outform der -in /etc/graylog/server/GraylogWeb.pem -out /etc/graylog/server/GraylogWeb.crt
sudo keytool -importcert -keystore /etc/graylog/server/cacerts.jks -storepass changeit -alias GraylogWeb -file /etc/graylog/server/GraylogWeb.crt
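
With the certificate signed, converted, and imported, it can be worth a quick sanity check before pointing Graylog at the files. A minimal verification, assuming the paths and alias used above:

sudo openssl x509 -in /etc/graylog/server/GraylogWeb.pem -noout -text | grep -A1 "Subject Alternative Name"
sudo keytool -list -keystore /etc/graylog/server/cacerts.jks -storepass changeit -alias GraylogWeb

The first command should show the IP and DNS entries from the alt_names section, and the second should list the GraylogWeb entry in the keystore.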
 

Next, we want to modify the following lines in our server.conf, located at /etc/graylog/server/server.conf.

 
http_publish_uri = https://<FQDN>:9000/
http_enable_tls = true
http_tls_cert_file = /etc/graylog/server/GraylogWeb.pem
http_tls_key_file = /etc/graylog/server/GraylogWeb.key
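
Graylog runs under its own service account and must be able to read the certificate and key, while other accounts should not. Some Graylog versions also only accept private keys in PKCS#8 format. A minimal sketch of both steps, assuming the graylog user/group created by the package install and the file names above (the .p8 file name is just an example):

sudo chown graylog:graylog /etc/graylog/server/GraylogWeb.pem /etc/graylog/server/GraylogWeb.key
sudo chmod 400 /etc/graylog/server/GraylogWeb.key

# Only needed if Graylog rejects the PKCS#1 key produced by "openssl genrsa"
sudo openssl pkcs8 -topk8 -nocrypt -in /etc/graylog/server/GraylogWeb.key -out /etc/graylog/server/GraylogWeb.key.p8

If the key is converted, point http_tls_key_file at the converted file instead.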

To activate the new configuration, restart the Graylog server with the following command:

sudo service graylog-server restart

We should now see the Graylog web page offer our new certificate when we navigate to https://<FQDN>:9000.
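
The handshake can also be checked from the command line if the browser has cached the old certificate; a quick verification, assuming the same <FQDN> and port as above:

openssl s_client -connect <FQDN>:9000 -servername <FQDN> </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates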


Once we have logged in, browse to System/Overview and verify that the cluster is online and connected to Elasticsearch.


Basic Optimization

Now that our server is online and secure, let’s do some basic optimization to improve performance and provide room for log bursts to be stored until they can be processed. Most of these changes are made in /etc/graylog/server/server.conf, with one made on the Elasticsearch side.

Increasing Search Performance And Reliability

Add the following at the bottom of the Elasticsearch configuration file (/etc/elasticsearch/elasticsearch.yml), since the search thread pool is an Elasticsearch setting rather than a Graylog one:

thread_pool:
  search:
    size: 100
    queue_size: 5000

This setting increases the size of the Elasticsearch search thread pool and its request queue. Without raising these limits, we ran into issues when we attempted to search larger ranges of historical log entries.
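
After restarting Elasticsearch, the new values can be confirmed through the _cat API. A quick check, assuming Elasticsearch is reachable on localhost:9200 (add the TLS and authentication options from part 1 if your cluster requires them):

curl -s 'http://localhost:9200/_cat/thread_pool/search?v&h=node_name,name,size,queue_size,rejected'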

Increase the Default Journal Size

For smaller setups, having a larger journal gives Graylog room to buffer logs when it becomes overwhelmed by the total number of logs coming in. While journaling can prevent the server from dropping log messages during periods of heavy load, alerting on the logs may be delayed because the events aren’t processed immediately. The message_journal_max_age and message_journal_max_size lines set the operating parameters for the journal. We’ve had success with these values in the past:

message_journal_max_age = 24h
message_journal_max_size = 50gb
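
Journal utilization shows up in the web interface under System/Nodes, and it can also be pulled from the REST API to confirm the new limits are in effect. A quick check, assuming the admin account and the HTTPS listener configured earlier (the exact endpoint may vary slightly between Graylog versions):

curl -s -u admin --cacert /opt/opensslkeys/graylogca.pem https://<FQDN>:9000/api/system/journal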

Increase Default Elasticsearch and Graylog Memory Heap

Increasing the amount of memory allocated to the Elasticsearch and Graylog processes greatly improves their performance. As a rule of thumb, allocate 12.5% of total memory to Graylog, 25% to Elasticsearch, and leave at least 50% free for the operating system’s buffers and file system cache. Ensure that Elasticsearch is not given more than about 30GB of heap; beyond roughly 32GB the JVM can no longer use compressed object pointers, which dramatically hurts performance.
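
As a worked example, on a hypothetical server with 16GB of RAM, those percentages come out to roughly:

Graylog heap:        16GB x 12.5% = 2GB  (-Xms2g -Xmx2g)
Elasticsearch heap:  16GB x 25%   = 4GB  (-Xms4g -Xmx4g)
OS cache/buffers:    at least 8GB left for the operating system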

Graylog Memory Heap

To modify the Graylog memory heap size, edit /etc/default/graylog-server and look for the line GRAYLOG_SERVER_JAVA_OPTS=. On that line, we want to modify the following values:

-Xms#g -Xmx#g

Replace # with the number of gigabytes of memory you want to give Graylog’s Java heap.
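
For the hypothetical 16GB server above, the heap portion of the line would read as follows; any other JVM flags already present on the line should be left in place:

-Xms2g -Xmx2g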

Elasticsearch Memory Heap

To modify the Elasticsearch memory heap size, edit /etc/elasticsearch/jvm.options and modify the following lines:

-Xms#g
-Xmx#g

Replace # with the number of gigabytes of memory you want to give the Elasticsearch Java heap.
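
Again using the hypothetical 16GB server, the two lines would read:

-Xms4g
-Xmx4g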

After a final reboot, you should have a system secured by TLS for the web interface, Elasticsearch, MongoDB, and the Graylog API. All that we need to do now is start adding data so we can begin building dashboards and alerts. Stay tuned for our next post on DHCP/Domain Controller logs, building our first alerts, and setting up indexes.