Migrating Graylog Servers - Part 3

This is the third in a multi-part series where I explore the process of transforming an existing Graylog install into a resilient and scalable multi-site installation. Start here for Part 1.

Previously we built our servers and reconfigured ElasticSearch. Next up is to build out the new Graylog2 and Graylog2-Web servers themselves.

Graylog2 Server Build

Software Install

Since I sized and installed the OS instance at the same time as the ElasticSearch nodes, I can jump straight to the software install. In my example I'm using Ubuntu Server 14.04 LTS1. Since the last time I set this stuff up Torch has started hosting their own software repos, which makes me happy. We can manually set up and install the software by running:

echo "deb http://packages.graylog2.org/repo/debian/ trusty 0.90" > /etc/apt/sources.list.d/graylog2.list
wget -qO - https://raw.githubusercontent.com/Graylog2/graylog2-puppet/master/files/RPM-GPG-KEY-graylog2 | apt-key add -
apt-get update
apt-get install graylog2-server graylog2-web

Or, if you want to use their repo package:

wget https://packages.graylog2.org/repo/packages/graylog2-0.91-repository-ubuntu14.04_latest.deb
dpkg -i graylog2-0.91-repository-ubuntu14.04_latest.deb
apt-get install apt-transport-https
apt-get update
apt-get install graylog2-server graylog2-web
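
Either way, before moving on it's worth a quick sanity check that apt can actually see both packages and is pulling them from the Graylog2 repo rather than somewhere else:

apt-cache policy graylog2-server graylog2-web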

Graylog2 Cluster Join

Actually joining the Graylog2 instances to the cluster is pretty brain-dead easy, since cluster membership is handled through the shared MongoDB instance. However, unlike ElasticSearch, there are a lot of changes we need to make to the configuration file. The settings I've listed below are the minimums. Please don't just replace your /etc/graylog2.conf wholesale with this; you need to read through the whole file and make at least these changes2.

is_master = false
password_secret = ${Password_Secret_From_Legacy_Server}
root_password_sha2 = ${Password_Hash_From_Legacy_Server}
rest_listen_uri = http://${Public_IP_of_Server}:12900/
retention_strategy = delete
elasticsearch_shards = 5
elasticsearch_replicas = 1
elasticsearch_index_prefix = graylog2
elasticsearch_cluster_name = ${ES_Cluster_Name}
elasticsearch_node_name = ${Server_Hostname}
elasticsearch_discovery_zen_ping_multicast_enabled = false
elasticsearch_discovery_zen_ping_unicast_hosts = east-es01.east.example.com:9300,west-es01.west.example.com:9300,graylog.example.com:9300
elasticsearch_cluster_discovery_timeout = 10000
mongodb_useauth = false
mongodb_host = graylog.example.com
mongodb_database = graylog2
mongodb_port = 27017
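
Two of those values come straight from the legacy box: password_secret and root_password_sha2 have to match what the legacy server already uses (password_secret in particular needs to be identical on every node). Assuming the legacy config lives at the default /etc/graylog2.conf, something like this pulls both:

grep -E '^(password_secret|root_password_sha2)' /etc/graylog2.conf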

The most important things to notice in that config are

  • is_master: Unlike ElasticSearch, this explicitly sets which node is the master, and like Highlander, "There can be only one."
  • rest_listen_uri: By default this is set to localhost. In order for the non-local graylog2-web instances to work, this has to be externally reachable.
  • elasticsearch_cluster_discovery_timeout: I made this number big since trans-continental links have relatively high latency.
  • mongodb_host: Points at the MongoDB master, currently the legacy server.
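
Before starting the service, it's also worth making sure the new node can actually reach the ElasticSearch transport port on each of the unicast hosts listed above (the hostnames are the ES nodes we built previously; swap in your own):

nc -zv east-es01.east.example.com 9300
nc -zv west-es01.west.example.com 9300
nc -zv graylog.example.com 9300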

At this point we can start the graylog2-server service and it should be pretty automagic. It will auto-join the graylog2-server cluster based on data in the MongoDB instance. It will also join the ElasticSearch cluster as a client node.
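
If you want to watch it happen rather than take my word for it, something like the following should do. The first curl hits one of the ElasticSearch nodes we set up previously, and the new node should show up in the list as a client; the second just proves the REST interface answers on the address we put in rest_listen_uri (run it from one of the other hosts to be sure it's reachable remotely, and any HTTP response at all is good enough):

service graylog2-server start
curl -s 'http://east-es01.east.example.com:9200/_cat/nodes?v'
curl -sI http://${Public_IP_of_Server}:12900/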

Graylog2 Web Setup

The Graylog2 Web config file is significantly shorter, and easier to deal with, than the Server itself. Edit the file /etc/graylog2/web/graylog2-web-interface.conf and make sure the following are set.

graylog2-server.uris="http://east-es01.east.example.com:12900/,http://west-es01.west.example.com:12900/,http://graylog.example.com:12900/"
application.secret="${Password_Secret_From_Legacy_Server}"
timeout.DEFAULT=10s
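
With that in place, starting the web service and checking it came up is quick. I'm assuming the package's default listen port of 9000 here; adjust if your init defaults say otherwise:

service graylog2-web start
curl -sI http://localhost:9000/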

Once it's up we should be good to go. Note that we don't actually point the web service at either the ElasticSearch cluster or the Mongo database. All of the information shown in the web interface comes through Graylog2 Server, including user accounts. The data is still stored in the Mongo database; it's just not directly accessed by the web app.

Now we have our new server and web interfaces up. Huzzah! At this point nothing is actually using the new Graylog2 servers, though, since all of the logs are still being shipped to the legacy system. That won't change until we retire the legacy server and start migrating hosts over.


  1. Because DevOps.

  2. The point of this blog is for me to document my process and to hope that others can learn from my mistakes and can start their project with at least more information than I did when I started mine. The ability to copy/paste is not a substitute for a basic understanding of what the fuck is going on.