Securing the Elastic Stack

This post originally appeared in our sysadvent series, and has been moved here following the discontinuation of the sysadvent microsite.

This is the second of three posts about Elastic Stack.

By default, the Elastic Stack services are available to anyone who can reach them. This leaves you free to choose your own security level, and the tools to provide it.

A simple search on Shodan for Kibana or Elasticsearch will quickly reveal that many do not secure their logs. I hope this post will encourage you to do so.

One efficient way to increase security is to place Elasticsearch and Kibana behind a front-end web server, add encryption, and require a username and password.

Service names

I’ve chosen the host names search.example.com for the Elasticsearch web API, and kibana.example.com for the Kibana web interface. You should choose something within your domain.

Web server

I’ve chosen the Nginx web server, due to its small size and straightforward configuration. Feel free to choose something else; other good alternatives include HAProxy, Apache HTTPD and hitch.

Configuration

Web server

Install Nginx, and make sure the service starts at boot.
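On CentOS 7, Nginx is packaged in the EPEL repository; a minimal sketch, assuming EPEL is not already enabled:

$ yum -y install epel-release
$ yum -y install nginx
$ systemctl enable nginx
$ systemctl start nginx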

Add configuration for the listening ports of the upstream applications. They should listen on localhost only ([::1] is the IPv6 loopback address). TCP port 9200 is the default for Elasticsearch, and 5601 for Kibana.

# /etc/nginx/conf.d/elastic.conf
upstream elasticsearch {
  server [::1]:9200;
}
upstream kibana {
  server [::1]:5601;
}
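For this to be safe, the upstream services must bind to the loopback interface only. With the packages from Elastic, this is controlled by network.host in elasticsearch.yml and server.host in kibana.yml; loopback is the default in both, but it is worth verifying:

# /etc/elasticsearch/elasticsearch.yml (excerpt)
network.host: localhost

# /etc/kibana/kibana.yml (excerpt)
server.host: localhost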

Add virtual host definitions for each:

# /etc/nginx/conf.d/servers.conf
server {
  listen 80;
  listen [::]:80;

  server_name search.example.com;

  location / {
    proxy_pass http://elasticsearch;
  }
}
server {
  listen 80;
  listen [::]:80;

  server_name kibana.example.com;

  location / {
    proxy_pass http://kibana;
  }
}

Test the configuration, and reload Nginx.
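On a systemd-based system such as CentOS 7, that amounts to:

$ nginx -t
$ systemctl reload nginx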

SELinux

In this example, I’m running on CentOS 7, which has SELinux set to Enforcing. I really don’t want to turn that off, since it keeps the server reasonably secure.

If you attempt to request pages from http://search.example.com, Nginx will answer 502 Bad Gateway. The web server is not allowed to make new outbound network connections, so it cannot proxy the request to Elasticsearch.

ssm@turbotape ~ :) % http http://search.example.com
HTTP/1.1 502 Bad Gateway
Connection: keep-alive
Content-Length: 173
Content-Type: text/html
Date: Tue, 29 Nov 2016 12:07:43 GMT
Server: nginx/1.10.2

<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.10.2</center>
</body>
</html>

This will be reflected in the audit logs on the system. A search for recent audit log entries matching the command name “nginx” shows this:

$ ausearch -i -c nginx -ts recent
----
type=SYSCALL msg=audit(11/29/2016 07:07:43.971:2167) : arch=x86_64
  syscall=connect success=no exit=-13(Permission denied) a0=0xf
  a1=0x7f95123d8340 a2=0x1c a3=0x7ffff9848770 items=0 ppid=32727
  pid=32728 auid=unset uid=nginx gid=nginx euid=nginx suid=nginx
  fsuid=nginx egid=nginx sgid=nginx fsgid=nginx tty=(none) ses=unset
  comm=nginx exe=/usr/sbin/nginx subj=system_u:system_r:httpd_t:s0
  key=(null)

type=AVC msg=audit(11/29/2016 07:07:43.971:2167) :
  avc:  denied  { name_connect } for  pid=32728 comm=nginx dest=9200
  scontext=system_u:system_r:httpd_t:s0
  tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket

(By the way, note all the juicy details in that log. That would be awesome to search for in the web interface. We’ll get there in part 3.)

We need to allow the web server to connect to the upstream applications. There is an httpd_can_network_connect SELinux boolean we can use, which allows the web server to make any outbound network connection.

setsebool -P httpd_can_network_connect on
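The -P flag makes the change persistent across reboots. You can check that the boolean took effect with getsebool:

$ getsebool httpd_can_network_connect
httpd_can_network_connect --> on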

If we retry the HTTP request, we should see “200 OK”.

ssm@turbotape ~ :) % http http://search.example.com
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 327
Content-Type: application/json; charset=UTF-8
Date: Tue, 29 Nov 2016 12:13:23 GMT
Server: nginx/1.10.2

{
    "cluster_name": "elasticsearch",
    "cluster_uuid": "NRmDw4b2QvuCQYlnrRyeSQ",
    "name": "ZVasElj",
    "tagline": "You Know, for Search",
    "version": {
        "build_date": "2016-11-11T22:08:49.812Z",
        "build_hash": "080bb47",
        "build_snapshot": false,
        "lucene_version": "6.2.1",
        "number": "5.0.1"
    }
}

(By the way, both Kibana and Elasticsearch are running as unconfined services. A targeted SELinux policy module is not yet included in the packages from Elastic.)

Password

Install httpd-tools, and add a password file for Nginx.

$ yum -y install httpd-tools
$ htpasswd -c /etc/nginx/users log
New password: <type a password>
Re-type new password: <type the same password>
Adding password for user log
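Note that the -c flag creates (or truncates) the password file; leave it out when adding more users later. The user name below is just an example:

$ htpasswd /etc/nginx/users anotheruser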

Update your virtual host definitions with auth_basic and auth_basic_user_file:

# /etc/nginx/conf.d/servers.conf
server {
  listen 80;
  listen [::]:80;

  server_name kibana.example.com;

  auth_basic "Who You Be?";
  auth_basic_user_file /etc/nginx/users;

  location / {
    proxy_pass http://kibana;
  }
}
server {
  listen 80;
  listen [::]:80;

  server_name search.example.com;

  auth_basic "Who You Be?";
  auth_basic_user_file /etc/nginx/users;

  location / {
    proxy_pass http://elasticsearch;
  }
}
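Reload Nginx, and verify that a request without credentials is rejected, while one with the new user succeeds. HTTPie prompts for the password when -a is given just a user name:

$ http http://search.example.com
$ http -a log http://search.example.com

The first request should return 401 Unauthorized, the second 200 OK.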

HTTPS Certificates

If your server is exposed to the Internet, you should be able to use Let’s Encrypt to generate an HTTPS certificate for each virtual host.
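A sketch using the certbot client, with package names as found in EPEL 7; certonly obtains certificates without editing the Nginx configuration, which we maintain by hand here:

$ yum -y install certbot python2-certbot-nginx
$ certbot certonly --nginx -d search.example.com
$ certbot certonly --nginx -d kibana.example.com

The resulting certificates are placed under /etc/letsencrypt/live/, so adjust the ssl_certificate paths in the examples below accordingly.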

For testing, you can create dummy certificates:

cd /etc/pki/tls/certs/
./make-dummy-cert search.example.com.pem
./make-dummy-cert kibana.example.com.pem

Update your configuration with HTTPS virtual hosts, and a redirect from HTTP to HTTPS:

# /etc/nginx/conf.d/servers.conf
server {
  listen 80;
  listen [::]:80;

  server_name kibana.example.com;
  location / {
    return 301 https://$server_name$request_uri;
  }
}

server {
  listen 443 ssl;
  listen [::]:443 ssl;
  server_name kibana.example.com;

  ssl_certificate     /etc/pki/tls/certs/kibana.example.com.pem;
  ssl_certificate_key /etc/pki/tls/certs/kibana.example.com.pem;

  auth_basic "Who You Be?";
  auth_basic_user_file /etc/nginx/users;

  location / {
    proxy_pass http://kibana;
  }
}

server {
  listen 80;
  listen [::]:80;

  server_name search.example.com;
  location / {
    return 301 https://$server_name$request_uri;
  }
}

server {
  listen 443 ssl;
  listen [::]:443 ssl;
  server_name search.example.com;

  ssl_certificate     /etc/pki/tls/certs/search.example.com.pem;
  ssl_certificate_key /etc/pki/tls/certs/search.example.com.pem;

  auth_basic "Who You Be?";
  auth_basic_user_file /etc/nginx/users;

  location / {
    proxy_pass http://elasticsearch;
  }
}
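The TLS defaults can be tightened as well. A minimal hardening sketch to add inside each HTTPS server block; the protocol and cipher choices here are assumptions, so adjust them to the clients you need to support:

  # Disable legacy protocols and weak cipher suites
  ssl_protocols TLSv1.2;
  ssl_ciphers HIGH:!aNULL:!MD5;
  ssl_prefer_server_ciphers on;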

An HTTP request should now redirect with 301 Moved Permanently.

ssm@turbotape ~ :) % http  http://kibana.example.com
HTTP/1.1 301 Moved Permanently
Connection: keep-alive
Content-Length: 185
Content-Type: text/html
Date: Tue, 29 Nov 2016 13:21:48 GMT
Location: https://kibana.example.com/
Server: nginx/1.10.2

<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.10.2</center>
</body>
</html>

An HTTPS request should indicate that you need to provide a password with 401 Unauthorized. If you use dummy certificates, you need to tell HTTPie (or curl) not to verify the certificate.

ssm@turbotape ~ :) % http --verify=no https://kibana.example.com
[...]: InsecureRequestWarning: Unverified HTTPS request is being made. [...]
HTTP/1.1 401 Unauthorized
Connection: keep-alive
Content-Length: 195
Content-Type: text/html
Date: Tue, 29 Nov 2016 13:22:20 GMT
Server: nginx/1.10.2
WWW-Authenticate: Basic realm="Who You Be?"

<html>
<head><title>401 Authorization Required</title></head>
<body bgcolor="white">
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx/1.10.2</center>
</body>
</html>
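And with credentials (using the log user created earlier; HTTPie prompts for the password), the request should return 200 OK:

$ http --verify=no -a log https://kibana.example.com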

We have now added authentication (the client must submit a username and password, and the server must present a valid HTTPS certificate), authorization (albeit coarse: anyone authenticated can connect), and privacy (client connections are encrypted).

One step closer to a secure log search environment.

Stig Sandbeck Mathisen

Former Senior Systems Architect at Redpill Linpro
