This post originally appeared in our sysadvent series and has been moved here following the discontinuation of the sysadvent microsite.

Logging the elapsed time in the web server access log can be very useful for statistics and problem solving. I usually add it at the end of the log line (that’s generally compatible with existing log-analysis tools like awstats), in a keyword=value format so that it’s easy to grep out. There is a small performance cost, though.

The elapsed time includes everything - when filtering out log lines with a high elapsed time you may typically find mobile clients downloading large files, their local bandwidth being the bottleneck. However, sum up the elapsed time and group by request, and most likely the requests “consuming” the most elapsed time will also be the requests consuming the most server resources.

If the bottleneck is on the client’s bandwidth side, you want a high max-connections number. If the bottleneck is local resources, e.g. CPU cycles, you want to keep a low max-connections number, probably not much higher than the number of CPUs available.
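As a sketch of where those knobs live (directive names are real, but the numbers are arbitrary examples, not recommendations): in Apache 2.4 with the event or worker MPM the cap on simultaneously served requests is MaxRequestWorkers (called MaxClients before 2.4), and in nginx the per-worker limit is worker_connections.

```apache
# Apache event/worker MPM: cap on simultaneously served requests.
# High value: clients are slow; low value (near CPU count): server is the bottleneck.
MaxRequestWorkers 150
```

The corresponding nginx setting is `worker_connections` inside the `events` block, multiplied by `worker_processes`.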

Apache

I typically throw something like this in the configuration:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" elapsed=%Dus" combined

Other things that may also be useful to log include the vhost, the port and the X-Forwarded-For header:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" elapsed=%Dus vhost=%v port=%p remoteip=%{X-Forwarded-For}i" combined

Then make sure to use the “combined” log format in the access log:

CustomLog /var/log/apache2/www.example.com-access.log combined

Here is an example command for analyzing the log:

cat /var/log/apache2/*access.log |
perl -ne '($req)=/ \"([A-Z]+ \/[^ ]*) HTTP/; /elapsed=(\d+)us/; $elapsed{$req}+=$1; END { for my $k (keys %elapsed) { printf "%6d %s\n", $elapsed{$k}/1000000, $k; } }' |
sort -nr | less

You might want to tune that regexp if you get a long tail.

Note: Apache logs are located under /var/log/httpd/ on RedHat and derivatives.

Nginx

I typically throw something like this into the Nginx configuration:

log_format  main  '$remote_addr - $remote_user [$time_local] $request '
                  '"$status" $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for" '
                  '"http-host=$host" "elapsed=${request_time}s" '
                  '"scheme=${scheme}"';
    access_log  /var/log/nginx/access.log main;

Here’s an example of how one can do simple statistics on the access log:

perl -ne '($req)=/\] ([[:upper:]]+ \/[^ ]{0,40})/; /elapsed=(\d+(\.\d+)?)s/; $elapsed{$req}+=$1; END { for my $k (keys %elapsed) { printf "%6.2f %s\n", $elapsed{$k}, $k; } }' access.log | sort -rn | less

The URL is shortened to forty characters here; YMMV. If the tail is very long, you probably want to adjust the regexp so that more requests get grouped together.

Tobias Brox

Senior Systems Consultant at Redpill Linpro

Tobias started working as a developer when he finished his degree at The University of Tromsø. He joined Redpill Linpro as a system administrator a decade ago, and has embraced working with our customers and maintaining and improving our internal tools.
