Oct 29, 2012
 
It goes without saying that security is becoming of increasing concern for anyone managing hosts connected to the Internet. You only have to open a new port on your firewall and you can just watch all the port-knocking traffic come rolling in from script kiddies all over the world. The attention is heart-warming at first, but quickly becomes tiresome. Fail2ban is what you need.

Intrusion Prevention software.

Fail2ban is a free, reliable, and easy to configure utility which performs the simple task of watching log files for evidence of suspicious connections, and then locking out traffic from the source IP address.

The default behaviour is to lock out connections for a certain period of time, which doesn’t need to be that long to disrupt and defuse a brute-force attack. When the ban time has elapsed, the ban configuration is reversed, so as to only temporarily inconvenience genuine access attempts which may have been incorrectly configured. Or if you’re feeling particularly zero-tolerant, ban them for an eternity.
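
The settings that govern this behaviour live in jail.conf, which we'll get to below. As an illustrative sketch (the values here are examples – check the defaults shipped with your own version):

```
[DEFAULT]
# How long (seconds) an offending IP stays banned
bantime  = 600
# Failures are counted within this window (seconds)
findtime = 600
# Number of failures within findtime before the ban-hammer falls
maxretry = 3
```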

Application Architecture

Fail2ban is available in all the standard repositories for the major distributions, and installs in the usual way with your package manager. For the purposes of this discussion, I’ve been using Fail2ban 0.8.6 installed on CentOS 6.2.

The fail2ban configuration files are in the conventional place: /etc/fail2ban/. A discussion of these config files well illustrates how fail2ban works.

/etc/fail2ban/fail2ban.conf

This contains mostly just information about logging – the debug level and the location of the log file. You can log either to the syslog or a file of your choice.

/etc/fail2ban/jail.conf

The main configuration file is /etc/fail2ban/jail.conf, and it contains a block for each ban case. That is, each block specifies that, for a given type of failure in a given log file, a specific action will be taken.

Below is the block pertaining to SSH failures, banned by an iptables rule:

[ssh-iptables]

enabled  = true
filter   = sshd
action   = iptables[name=SSH, port=ssh, protocol=tcp]
           sendmail-whois[name=SSH, dest=root, sender=fail2ban@mail.com]
logpath  = /var/log/secure
maxretry = 5

So this states to monitor the file /var/log/secure using the sshd filter, and if it matches, to perform the iptables and sendmail-whois actions. Filters and actions are defined in other configuration files.

Filters: /etc/fail2ban/filter.d/

The directory /etc/fail2ban/filter.d/ defines the regular expressions which are used to match on log messages indicating connection failures. This text will vary with different distributions and different versions of these services, so the regular expression sets are fairly extensive, although to be fair they realistically cannot be completely exhaustive. Check for new versions of Fail2ban periodically and keep it up to date.
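
Since the log text varies, it’s worth verifying that a filter actually matches your own logs before you trust a jail; fail2ban ships a tool for exactly this:

```shell
# Test the sshd filter's regexes against the live log (paths as on CentOS)
fail2ban-regex /var/log/secure /etc/fail2ban/filter.d/sshd.conf
```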

As an example, the file /etc/fail2ban/filter.d/sshd.conf contains a list of regular expressions which define flagged text. Here is a truncated excerpt:

failregex = ^%(__prefix_line)s(?:error: PAM: )?Authentication failure for .* from <HOST>\s*$
            ^%(__prefix_line)s(?:error: PAM: )?User not known to the underlying authentication module for .* from <HOST>\s*$
            ^%(__prefix_line)sFailed (?:password|publickey) for .* from <HOST>(?: port \d*)?(?: ssh\d*)?$

You get the idea.
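
To see in miniature what these patterns are doing, here’s a sketch with plain grep. Fail2ban’s <HOST> tag expands to a hostname/IP capture group, which we stand in for here with an explicit IPv4 pattern; the log line is invented for illustration:

```shell
# A made-up line of the kind sshd writes to /var/log/secure
line='sshd[1234]: Failed password for root from 203.0.113.5 port 44321 ssh2'

# Rough stand-in for the "Failed password" failregex, with <HOST>
# spelled out as a literal IPv4 pattern
pattern='Failed (password|publickey) for .* from ([0-9]{1,3}\.){3}[0-9]{1,3}'

echo "$line" | grep -qE "$pattern" && echo "match"
```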

Actions: /etc/fail2ban/action.d/

The action.d directory contains configuration files which define the verbs of fail2ban – that is, the tasks to be performed when an alert is generated, under “actionban”. Equally, each file also contains an “actionunban”, which describes what is executed to undo the effects of the lockout after the ban period has expired.

The relevant lines of the iptables.conf file are displayed below. The first adds a drop rule to the iptables ruleset, and the second line removes the same rule.

actionban = iptables -I fail2ban-<name> 1 -s <ip> -j DROP
#
actionunban = iptables -D fail2ban-<name> -s <ip> -j DROP

The nice thing about fail2ban is that, as you can see here, the configuration is all fairly self-explanatory if you have a reasonable knowledge of shell scripting and understand what it’s trying to do. The source IP is extracted by the filter and passed to the “actionban” command, which inserts a DROP rule for that IP at the beginning of the firewall ruleset. Similar actions are used for banning source addresses with tcpwrappers.
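
As a quick check, the effect of a ban is visible in the ordinary iptables listing – fail2ban maintains its own chain, named after the name= parameter in the jail (SSH in the example above):

```shell
# Show the rules fail2ban has inserted for the SSH jail;
# banned source IPs appear as DROP rules at the top
iptables -L fail2ban-SSH -n
```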

Some of the actions are for sending notifications of events. The “sendmail-whois” action is rather useful in that it sends an alert mail, but also includes the results of a “whois” invocation so you can see the identity and location of the owner of the IP address from which the incursion is emanating.

How to Configure Fail2ban

The steps involved in configuring fail2ban are as follows:

  • Identify those files on your system that need to be watched – generally these would be the log files for any running services through which an outside party could gain a connection to your host – /var/log/secure (SSH), /var/log/httpd/access_log (Apache HTTPD), or whatever.
  • Edit /etc/fail2ban/jail.conf:
    • select the config block that best suits your purpose, adapt an existing one, or write a new one;
    • set the variable enabled equal to “true”;
    • set your email address, if you want to be notified;
    • set the filter to use, according to what service or process is being monitored;
    • check that the regex in the filter.d config file actually matches what you would expect for an alert.
  • Restart fail2ban: # /etc/init.d/fail2ban restart

Repeat the process for everything you feel needs to be monitored. Any problems, check the man page or the Fail2ban Wiki.
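
Once restarted, you can sanity-check the daemon and its jails with fail2ban-client (using the jail name from the example block above):

```shell
# Is the server up, and which jails are active?
fail2ban-client status

# Per-jail detail: failure counts and currently banned IPs
fail2ban-client status ssh-iptables
```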

Fail2ban is a wonderfully simple and straightforward tool that does exactly what it says, and whose operation and configuration is transparent to the systems administrator. It is by no means perfect, and in a site with a large number of hosts, you’d probably be better off using something centralised like OSSEC. However fail2ban is significantly easier to configure than OSSEC and can easily be rolled out to individual servers on an ad hoc basis.

So there you have your ban-hammer. Go on and wield it with extreme prejudice.


Matt Parsons is a freelance Linux specialist who has designed, built and supported Unix and Linux systems in the finance, telecommunications and media industries.

He lives and works in London.

Oct 28, 2012
 
It’s a scary thing when your machine doesn’t boot, when you don’t even get an error message, just a single quietly ominous prompt:
grub>

Not what I want to see

Of course, it’s probably worse when you get nothing at all, or a “No Operating System Found”. But the unsettling thing for many sysadmins about the grub> prompt is that it’s a prompt that doesn’t respond to the usual Linux commands. The inline help isn’t hugely helpful either. I’m going to demonstrate a few useful GRUB commands for recovering from a failed boot, and explain a few things about GRUB and bootloaders along the way.

A broken GRUB config typically arises when creating an extra OS partition, or when migrating disks. Depending on the operating system, or its setup options, the old GRUB configuration can be rendered invalid. The usual wisdom is to simply boot from a Live installation CD/DVD into recovery mode and fix everything from there. That’s one way, but I’ve always felt it’s more self-sufficient to be able to fix the problem without the extra tool of a USB pen or CD. You don’t need to have written down a lot of long kernel version strings either, as I’ll show you. The commands are easier to remember if you understand what is actually happening during the boot process.
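
To give a flavour of what’s involved, a manual boot from the GRUB legacy prompt boils down to four commands – the device and file names below are hypothetical examples, and tab-completion at the prompt spares you from remembering version strings:

```
grub> root (hd0,0)
grub> kernel /vmlinuz-2.6.32-220.el6.x86_64 ro root=/dev/sda1
grub> initrd /initramfs-2.6.32-220.el6.x86_64.img
grub> boot
```
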



Oct 26, 2012
 
In the spirit of ad hoc, sloppy hackery, here’s a trick I’ve used in the past for providing myself with an escape route when performing risky remote maintenance without a console. There’s nothing worse than irredeemably disconnecting your remote shell and causing yourself the catastrophe of a visit to the data centre, when you should be in the pub. Here’s a little way of taking out some insurance against your fat fingers, using the “at” daemon.

Broken ethernet cable

Caveat: this is not best practice. The whole reason for adhering to sound strict principles of design is to avoid performing risky procedures which might require this kind of mitigation. Still, it doesn’t hurt to know.

The basic principle of this contrivance is to use the “at” daemon to run a back-out script as a kind of “dead man’s handle”. That is, if your hands come off the controls, this failsafe kicks in and puts things back the way they were – not careening out of control around a sharp bend.

The “at” command

The command at is a relatively unused little tool, despite having been a part of Linux since the beginning, probably. Its more organised uncle, cron, is the one everyone’s familiar with for automating recurring tasks, ad infinitum. But at is for executing one-offs, so it isn’t really an automation tool as such, since you still have to type whatever you want it to execute. It simply adds a delay, running the command later on, when it’s more timely to do so. It doesn’t save you any work – it just does the work later, while you’re down the pub.
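
As a sketch of the dead man’s handle idea – the firewall restart here is a stand-in for whatever back-out your particular risky change needs:

```shell
# Before touching the firewall config, schedule a restore of the
# known-good rules for ten minutes' time
echo '/sbin/service iptables restart' | at now + 10 minutes

# ...make the risky change; if you're still connected afterwards,
# find the job number and cancel the failsafe
atq
atrm 4        # job number as reported by atq (hypothetical)
```
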



Oct 24, 2012
 

If you’re being careful and monitoring your Apache HTTPD services to verify that they’re up and serving their pages correctly, then you may find that you get a lot of unnecessary event messages from your monitoring agent in the apache log file. After all, where your logs are concerned, you’re really only interested in actual client connections, and not Nagios checking in every 5 minutes.

It’s simple enough to flag all events meeting certain criteria and then exclude those lines from the log. To do so, ensure that the SetEnvIf module is enabled by checking for this line in /etc/httpd/conf/httpd.conf:

  LoadModule setenvif_module modules/mod_setenvif.so

The Apache documentation for the SetEnvIf module describes all the options and presents examples.

In my case, I wanted to exclude from my Apache HTTPD logging any lines generated by the regular status checks done by either Nagios or the Stingray Zeus Load Balancer. While each of these has a particular IP address which could be used as the exclusion criteria, I wanted to create a more reusable configuration that could be used in different environments. Fortunately, the documentation illustrates that one can also match on the User-Agent string.

To find out what User-Agent strings the Nagios daemon and the load balancer were using, I changed the Apache LogFormat from “common”, which I generally use, to “combined”, which also logs the User-Agent, under “%{User-Agent}i”. This showed that Nagios takes its User-Agent string from its check command, in this case “check_http/v1.4.15 (nagios-plugins 1.4.15)”, whereas the Stingray Zeus Load Balancer registers as “HTTP-Monitor/1.1”. For brevity, I matched on just the first part of each, and altered my configuration to look like this:


<VirtualHost *:80>
	ServerName www.example.com

	SetEnvIf User-Agent "^HTTP-Monitor" dontlog
	SetEnvIf User-Agent "^check_http" dontlog
	ErrorLog logs/errors.log
	CustomLog logs/access.log common env=!dontlog
</VirtualHost>

The relevant lines are those beginning with “SetEnvIf”, which cause any request whose User-Agent matches the regex to have the environment variable “dontlog” set. Then, in the CustomLog line, I simply tell it not to log any requests with “dontlog” set.
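
For reference, the stock “combined” format that revealed the User-Agent strings is defined like this in httpd.conf (this is the standard definition shipped with Apache):

```
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
```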

Reload the Apache config, and that’s about it. Now, my logs are significantly smaller, and free of all the background noise of keep-alives and status pings.
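
You can verify the exclusion from the command line by faking the monitor’s User-Agent (hostname as in the example config above):

```shell
# This request should NOT appear in access.log...
curl -A 'HTTP-Monitor/1.1' http://www.example.com/

# ...while a normal request should
curl http://www.example.com/
tail -n 2 /var/log/httpd/access.log
```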

