It can probably be understood from my previous blog post that if it were up to me, I’d avoid products like CrowdStrike - but every now and then I still have to install something like that. It’s not the idea of “security software” per se that I’m against, it’s the actual implementation of many of those products. This post lists some properties that should be fulfilled for me to be happy to install such a product.
Free and Open Source Software
I admit I may be a bit fanatical about this, and I realize that there are different points of view - but I strongly favor software where the source code is available both to me and to anyone else who wants to look into it, where the revision control history and a good changelog are also available, where I can compile the code myself if I wish to, and where I’m free to fix things that are broken and publish my patches. Some reasons for this:
- I have at least the theoretical possibility of doing a complete audit of the software
- A quick peek at the source code is often enough to deduce whether it’s good-quality code or a heap of spaghetti
- Access to the revision control history may also be important when doing a security review. Is it a one-man project, or does it have multiple contributors? (A few quick checks are sketched below.)
- I would also argue that free licenses give improved security - they make it possible to maintain the software even if the product owner goes bankrupt or stops maintaining the product. They also ensure that the software can be improved and bugs fixed, even if the company owning the product finds no financial motivation in paying its developers to do such work.
Debugging problems is also a lot easier when one doesn’t have to deal with a black box.
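As a concrete illustration of the kind of review the revision control history allows, here are a few quick checks I tend to run against a repository before trusting it - the URL and project name are of course made up:

```
# Clone the (hypothetical) repository and look at its history
git clone https://git.example-vendor.com/example-agent.git
cd example-agent

# How many people contribute, and how active has the last year been?
git shortlog -sn --since="1 year ago" | head

# Are releases tagged, so that fixes can be traced to versions?
git tag --sort=-creatordate | head

# Does the history look like real engineering work, or like noise?
git log --oneline -20
```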
Sane installation and upgrade procedures
It should be easy not only to install the package, but also to keep it up to date using automated and standard tools. For old, stable and commonly used software, that means being included in the packages shipped by the OS distributor. When that’s not an option, the second best is for the vendor to set up a dedicated package repository, serving the latest versions of the software directly from the source, packaged for the operating system in use. That does involve some setup and costs for the software distributor, particularly if one wants to support many different operating systems and Linux distributions. The third best is to have a git repository somewhere with the latest version always available, where it’s possible to follow some branch to subscribe to bugfixes and security releases. That shifts some of the cost from the developer over to us sysadmins, but at least it makes it possible for us to set up routines for installing and upgrading the software.
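As a sketch of what the “second best” option can look like on a Debian-based system - the vendor URL, key and package name here are hypothetical:

```
# Fetch the vendor's signing key into its own keyring
curl -fsSL https://repo.example-vendor.com/apt/key.asc \
  | gpg --dearmor -o /usr/share/keyrings/example-vendor.gpg

# Add the repository, pinned to that key only
echo "deb [signed-by=/usr/share/keyrings/example-vendor.gpg] https://repo.example-vendor.com/apt stable main" \
  > /etc/apt/sources.list.d/example-vendor.list

apt update && apt install example-agent
```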
Having an rpm or deb package available may make it easier to install and uninstall the package on Red Hat, Debian and derivatives, but on its own it gives no path for automatic updates - so it’s not on my list - and a package that requires special configuration steps during the installation process does not make the process particularly easy to automate. All configuration should be done through a configuration file, or alternatively through environment variables - but one should not have to combine the two.
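Coming back to automatic updates: once a proper repository is configured, updates can ride on the distribution’s standard tooling - on Debian, for instance, something along these lines (the vendor origin string is an assumption and depends on the vendor’s Release file):

```
# Let unattended-upgrades pull fixes from the vendor repository too
cat > /etc/apt/apt.conf.d/51example-vendor <<'EOF'
Unattended-Upgrade::Allowed-Origins {
    "example-vendor:stable";
};
EOF
```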
One may probably trust a package downloaded from a web server controlled by the software vendor, with good SSL certificates and certificate handling practices in place, and without redirects to arbitrary-looking URLs on a third-party CDN or cloud provider. However, best practice is still to have the package signed - or at least to verify its checksum. Most distributions come with standard tools for signing and verifying packages published to a repository - and then the package transport does not matter anymore: the package can be trusted no matter whether it’s downloaded over http or ftp, from a mirror, through a torrent, over ipfs, copied over floppy disks from a neighbour, or fetched via some third-party cloud provider or CDN.
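For packages installed from a configured repository, apt or dnf verify signatures automatically; for a manually downloaded package, a minimal check could look like this (the file names are made up):

```
# Verify a detached GPG signature against the vendor's published key...
gpg --verify example-agent_1.2.3_amd64.deb.asc example-agent_1.2.3_amd64.deb

# ...or at the very least verify the published checksum
sha256sum -c example-agent_1.2.3_amd64.deb.sha256
```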
I’ve encountered many weird installation procedures - some vendors like to make it easy, suggesting that we install things by running `curl https://gettheacmeapp.sh/install.sh | bash -s`. Luckily, so far I haven’t encountered any security-related software package with this in the official installation instructions.
Secure handling of configuration secrets and collected data
Secrets like credentials for logging into remote systems should be kept either in a secure configuration file or in an environment variable. They should not be embedded in the binary, stored in the systemd configuration, or passed as a command-line option to the process - command-line arguments are visible to every local user through the process list.
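A minimal sketch of the pattern I have in mind, assuming a systemd service and using made-up file and variable names:

```
# Keep the secret in a root-only file, never on the command line
install -D -m 0600 -o root -g root /dev/null /etc/example-agent/secrets.env
echo 'EXAMPLE_AGENT_API_TOKEN=changeme' > /etc/example-agent/secrets.env

# Hand it to the service as an environment variable via a drop-in;
# the unit file only references the secret file, it does not contain it
mkdir -p /etc/systemd/system/example-agent.service.d
cat > /etc/systemd/system/example-agent.service.d/secrets.conf <<'EOF'
[Service]
EnvironmentFile=/etc/example-agent/secrets.env
EOF
systemctl daemon-reload
```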
Strong encryption of the exported data is paramount - I’ve seen tools sending data away unencrypted. Nowadays this is usually solved by using https. I would like to be able to check for myself that the TLS setup on the remote host is secure, so having the remote host name visible in the configuration files rather than hidden in the binary is a good thing.
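A couple of commands I might use to convince myself of that, with a made-up endpoint name (testssl.sh is a commonly used open source TLS scanner):

```
# Inspect the certificate, its issuer and validity dates
openssl s_client -connect telemetry.example-vendor.com:443 \
    -servername telemetry.example-vendor.com </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject -dates

# Or let a scanner grade the whole TLS configuration
testssl.sh telemetry.example-vendor.com
```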
File names and locations
There is the Filesystem Hierarchy Standard, which describes, for example, that configuration files should live under `/etc` while log files should live under `/var/log`. There are good reasons for having a standard - system administrators may assume it is followed when allocating disk, dealing with permissions, setting up backup routines and logfile rotation, configuring tripwire-style security tools, or searching for information when something goes wrong - so I expect the software components I install to follow those conventions.
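Checking this is cheap when the software comes as a proper package - again with a hypothetical package name:

```
# List every file the package installed; config, binaries and logs
# should land where the FHS says they should
dpkg -L example-agent    # Debian and derivatives
rpm -ql example-agent    # Red Hat and derivatives
```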
File names should be consistent. If the service is called `fooblattibar`, then I expect to find files like `/usr/bin/fooblattibar`, `fooblattibar.service`, `fooblattibar.log` and `/etc/fooblattibar.d/*.yaml`. It may be annoying and/or confusing if the config file is `/etc/fooblatti.yaml` while the service is called `fbb`. Perhaps someone thought it would be “more secure” to use arbitrary file names, or perhaps the file names use different shortcuts that make sense to the main developer (and only to him) … then it’s beyond annoying - it looks more like a virus trying to hide itself than a legitimate software package.
Is it too much to ask for a program that gives useful output when started with `--help` or `--version`? My colleague also wants man pages.
Keeping the data collected locally
Quite a few of those services require installing some “black box” software that collects data locally and ships it to the security company. In some cases the security company does not handle the data itself, but pushes it to some cloud solution.
This may be good for “observability”, but I have issues with it. From a security point of view, I should at least have an overview of what data leaves my systems. I also think it’s a good idea to “leak” as little data as possible. If the purpose is to scan for security incidents (and not to make predictions on when the system will run out of disk space, etc.), then why not keep the ruleset locally and ensure that only information flagged as an indicator of a security incident is sent to the security company?
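A sketch of the model I would prefer - the ruleset and scanning stay local, and only actual findings leave the box. Here I use AIDE, a common open source file integrity checker, purely as a stand-in, with a made-up reporting endpoint (database paths vary between distributions):

```
# The ruleset and baseline stay on the local machine
aide --init
mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db

# Run the check locally; forward a report only when something is flagged
# (aide --check exits non-zero when it finds differences)
if ! aide --check > /var/log/aide-report.txt 2>&1; then
    curl -fsS -X POST --data-binary @/var/log/aide-report.txt \
        https://soc.example-vendor.com/api/findings
fi
```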
Information sent out of the box should preferably not go to cloud solutions based in faraway countries; ideally, the jurisdictions of the chain of companies handling or having access to the data should not differ too much from the jurisdiction that applies to the company owning the data.
Information flow should be compatible with an egress firewall
If the software communicates over the internet, it should communicate with a service that has a fixed IP address, so that it’s easy to open pinholes in the firewall - and it should also support using a proxy, configured through the standard environment variables.
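Both halves of that are straightforward to express - here with a made-up proxy host and a documentation-range IP, and assuming an nftables `inet filter` table with an `output` chain already exists:

```
# Point the agent at the proxy via the standard variables
mkdir -p /etc/systemd/system/example-agent.service.d
cat > /etc/systemd/system/example-agent.service.d/proxy.conf <<'EOF'
[Service]
Environment="https_proxy=http://proxy.internal:3128"
Environment="no_proxy=localhost,127.0.0.1"
EOF
systemctl daemon-reload

# Egress firewall: allow traffic out only to the vendor's fixed address
nft add rule inet filter output ip daddr 203.0.113.10 tcp dport 443 accept
```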
Round-up
I’m not going to name any products here, but I’ve recently been installing software on behalf of a third party, because the customer is paying said third party for “improved security” - and the software has defied most of the expectations above, many of which I consider to be sound security practices.
I suspect that some companies focus much more on marketing, sales and perhaps lobbying than on the product they are selling. The purpose of the software is probably to let people tick off checkboxes on a form; perhaps ticking the checkboxes is required to win a tender or to get some ISO certification. Perhaps someone in management gets a “warm and fuzzy feeling” because they’ve actually done something to improve security.
As a sysadmin I get the very opposite of a “warm and fuzzy feeling”. I’m installing some black-box software; I have no idea what it is doing, there is very little information to be found on what it is supposed to be doing, and there is no way to know for sure that it does what it is supposed to do. From my point of view, this breaks the integrity of the systems I’m maintaining. When I have to deviate from sound security policies to get the thing installed, I start to think that the people who made the product are not particularly competent in security. Hence, my gut feeling is that the most secure thing to do would be not to install said software.
I may be wrong. There is certainly security value in collecting and processing information from the systems; maybe this software will cause some security vulnerability or suspicious network activity to be discovered. The best would be to catch such things without help from any third party - but “defence in depth” is usually a good thing.