Oct 29, 2012
It goes without saying that security is of increasing concern for anyone managing hosts connected to the Internet. You only have to open a new port on your firewall and you can watch the port-knocking traffic come rolling in from script kiddies all over the world. The attention is heart-warming at first, but quickly becomes tiresome. Fail2ban is what you need.

(Image: a 3D-printed ban hammer. Intrusion prevention software.)

Fail2ban is a free, reliable, and easy to configure utility which performs the simple task of watching log files for evidence of suspicious connections, and then locking out traffic from the source IP address.

The default behaviour is to lock out connections for a certain period of time, which doesn’t need to be that long to disrupt and defuse a brute-force attack. When the ban time has elapsed, the ban configuration is reversed, so as to only temporarily inconvenience genuine access attempts which may have been incorrectly configured. Or if you’re feeling particularly zero-tolerant, ban them for an eternity.
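As a sketch of what this looks like in jail.conf (the values here are illustrative, not defaults you should rely on):

```ini
# jail.conf – how long, in seconds, a ban stays in place
bantime = 600

# a negative value makes the ban permanent
# bantime = -1
```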

Application Architecture

Fail2ban is available in all the standard repositories for the major distributions, and its installation is typical, according to the usual package procedures. For the purposes of this discussion, I’ve been using Fail2ban 0.8.6 installed on Centos 6.2.

The fail2ban configuration files are in the conventional place: /etc/fail2ban/. A discussion of these config files well illustrates how fail2ban works.


Global Settings: /etc/fail2ban/fail2ban.conf

This file contains mostly just information about logging – the debug level and the location of the log file. You can log either to syslog or to a file of your choice.
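A minimal version of this file might look something like the following sketch (values illustrative, from the 0.8 series):

```ini
[Definition]
# 0.8.x log levels run from 0 (quiet) to 4 (debug)
loglevel = 3
# logtarget can be SYSLOG, STDOUT, STDERR or a file path
logtarget = /var/log/fail2ban.log
```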


The main configuration file is /etc/fail2ban/jail.conf, and it contains a block for each ban case – that is, one block of configuration settings specifying that when a given type of failure appears in a given log file, a specific action will be taken.

Below is the block pertaining to SSH failures, banned by an iptables rule:


[ssh-iptables]

enabled  = true
filter   = sshd
action   = iptables[name=SSH, port=ssh, protocol=tcp]
           sendmail-whois[name=SSH, dest=root]
logpath  = /var/log/secure
maxretry = 5

So this block states that fail2ban should monitor the file /var/log/secure using the sshd filter, and on a match perform the iptables and sendmail-whois actions. Filters and actions are defined in other configuration files.
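By way of comparison, a jail for Apache basic-auth failures follows exactly the same shape. The block below is a sketch – filter names and log paths vary between distributions:

```ini
[apache-auth]

enabled  = true
filter   = apache-auth
action   = iptables[name=Apache, port=http, protocol=tcp]
logpath  = /var/log/httpd/error_log
maxretry = 3
```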

Filters: /etc/fail2ban/filter.d/

The directory /etc/fail2ban/filter.d/ defines the regular expressions which are used to match on log messages indicating connection failures. This text will vary with different distributions and different versions of these services, so the regular expression sets are fairly extensive, although to be fair they realistically cannot be completely exhaustive. Check for new versions of Fail2ban periodically and keep it up to date.

As an example, the file /etc/fail2ban/filter.d/sshd.conf contains a list of regular expressions which define flagged text. Here is a truncated excerpt:

failregex = ^%(__prefix_line)s(?:error: PAM: )?Authentication failure for .* from <HOST>\s*$
            ^%(__prefix_line)s(?:error: PAM: )?User not known to the underlying authentication module for .* from <HOST>\s*$
            ^%(__prefix_line)sFailed (?:password|publickey) for .* from <HOST>(?: port \d*)?(?: ssh\d*)?$

You get the idea.
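To see a failregex in action, here's a sketch using bash's own regex matching against a fabricated /var/log/secure line. The pattern is a simplified form of the third expression above, with fail2ban's <HOST> tag replaced by a capture group, which is roughly what fail2ban does internally:

```shell
#!/bin/bash
# A fabricated log line of the kind sshd writes on a failed login:
line='Oct 29 10:15:01 myhost sshd[2212]: Failed password for root from 203.0.113.7 port 51453 ssh2'

# Simplified failregex; the second capture group plays the role of <HOST>:
pattern='Failed (password|publickey) for .* from ([0-9.]+)( port [0-9]*)?( ssh[0-9]*)?$'

if [[ $line =~ $pattern ]]; then
    echo "banning host: ${BASH_REMATCH[2]}"
fi
# prints: banning host: 203.0.113.7
```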

Actions: /etc/fail2ban/action.d/

The action.d directory contains configuration files which define the verbs of fail2ban – that is, the tasks which can be performed should an alert be generated – “actionban”. Equally, each file also contains an “actionunban”, which describes what is executed to undo the effects of the lockout after the ban period has expired.

The relevant lines of the iptables.conf file are displayed below. The first adds a drop rule to the iptables ruleset, and the second line removes the same rule.

actionban = iptables -I fail2ban-<name> 1 -s <ip> -j DROP
actionunban = iptables -D fail2ban-<name> -s <ip> -j DROP

The nice thing about fail2ban is that, as you can see here, the configuration is all fairly self-explanatory if you have a reasonable knowledge of shell scripting and understand what it’s trying to do. The source IP is extracted by the filter and passed to the “actionban” command, which inserts a DROP rule for that IP at the beginning of the firewall ruleset. Similar actions are used for banning source addresses with tcpwrappers.
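fail2ban performs the tag substitution internally in Python, but the effect is easy to sketch in shell – here with the jail name SSH and a documentation-range address standing in for a real attacker:

```shell
#!/bin/bash
actionban='iptables -I fail2ban-<name> 1 -s <ip> -j DROP'

# Substitute the <name> and <ip> tags the way fail2ban would before running the command:
cmd=$(echo "$actionban" | sed -e 's/<name>/SSH/' -e 's/<ip>/203.0.113.7/')

echo "$cmd"
# prints: iptables -I fail2ban-SSH 1 -s 203.0.113.7 -j DROP
```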

Some of the actions are for sending notifications of events. The “sendmail-whois” action is rather useful in that it sends an alert mail, but also includes the results of a “whois” invocation so you can see the identity and location of the owner of the IP address from which the incursion is emanating.

How to Configure Fail2ban

The steps involved in configuring fail2ban are as follows:

  • Identify those files on your system that need to be watched – generally these would be the log files of any running services through which an outside party could gain a connection to your host – /var/log/secure (SSH), /var/log/httpd/access_log (Apache HTTPD), or whatever.
  • Edit /etc/fail2ban/jail.conf:
    • select the config block that best suits your purpose, adapt an existing one, or write a new one;
    • set the variable enabled equal to “true”;
    • set your email address, if you want to be notified;
    • set the filter to use, according to what service or process is being monitored;
    • check that the regex in the filter.d config file actually matches what you would expect for an alert.
  • Restart fail2ban: # /etc/init.d/fail2ban restart

Repeat the process for everything you feel needs to be monitored. Any problems, check the man page or the Fail2ban Wiki.

Fail2ban is a wonderfully simple and straightforward tool that does exactly what it says, and whose operation and configuration are transparent to the systems administrator. It is by no means perfect, and in a site with a large number of hosts you’d probably be better off using something centralised like OSSEC. However, fail2ban is significantly easier to configure than OSSEC and can easily be rolled out to individual servers on an ad hoc basis.

So there you have your ban-hammer. Go on and wield it with extreme prejudice.

Matt Parsons is a freelance Linux specialist who has designed, built and supported Unix and Linux systems in the finance, telecommunications and media industries.

He lives and works in London.

Oct 24, 2012
The best practice for maintaining a Linux server is to run the smallest optimal set of software. That is, there should be nothing running that isn’t being used, and ideally nothing should be installed that isn’t necessary. But the default installation will give you more than you need. The fat needs to be trimmed.
(Image: hedge-trimming, Old Bolingbroke.)

There are several good reasons for doing this:

  • Minimize resource utilization – disk space, network bandwidth, CPU time and RAM are all finite; there is no reason to waste these quantities on processes which don’t have a purpose.
  • Greater security – the fewer open ports and running processes, the fewer vulnerabilities there are likely to be on the system.
  • Easier maintenance – with less software there are fewer dependencies to worry about, less to upgrade, and patching is less problematic.
  • Elegant simplicity – management of a host is easier and cleaner if every component has its utility and nothing is extraneous.

A server freshly built from a Linux DVD – whatever the distribution – will automatically be installed with a set of software suited to its function. During the installation process, one selects “Basic Server” or “Desktop”, and a predetermined selection of software packages is installed accordingly. The thing to remember is that this selection is for a general case, and is rarely exactly right for any one host in particular. For example, selecting the “Web server” option on some distributions might install both Apache HTTPD and NGINX by default, although only one of these would probably be used at a time. The extra package would need to be uninstalled.


May 06, 2012
Data security is a necessary part of Linux systems administration. The primary focus tends to be on guarding against external penetration, then on securing systems from internal breach by nefarious workers. One facet which is frequently neglected is how to safeguard residual data after systems are decommissioned. The problem is, the rm command just isn’t sufficient. If you want your data destroyed beyond recovery, then shred is the tool to use.

(Image: shredded documents of the US Embassy.)

Hard disks are essentially just spinning platters of magnetised rusty iron. Data is represented by a vast series of minuscule charged patterns. There’s probably more to it than this. I’ll leave the physics as a research exercise for the reader.

On top of the raw hard disk drive, conceptually, is the filesystem. In general, file data is stored in blocks on disk, which are tracked by a structure known as an inode; the directory object maps the filename to the inode number. So when a file is deleted, the entry is simply removed from the directory index, but the file data itself still exists untouched – orphaned, as it were – and will continue to exist until the space is reused and overwritten.

The problem here, of course, is that rm’ing a file doesn’t actually delete it. The data contents still exist. The only way to indelibly delete a file is to overwrite it. Even then, it’s possible with specialised forensic software to recover overwritten data. So to securely destroy a file, it must be overwritten again and again, preferably with obnoxious junk data. The little-used shred utility does just this.

Shred Example

The command itself is simple and straightforward. To destroy a file, specify the number of wipes to perform, and the name of the file. You can also view the progress of the command with the “verbose” switch:

  # shred -u -n 30 -v customer_data.txt

If the “-u” option is omitted, the file will remain, but will be overwritten with gibberish. If the “-n” option is dropped, then the default is to make just three passes.
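A quick and harmless way to see the difference: shred a throwaway file without “-u” and note that the name survives while the contents do not (shred here is the standard GNU coreutils tool; the filename is of course just for illustration):

```shell
#!/bin/bash
printf 'top secret\n' > demo.txt

# Three overwrite passes; without -u the file itself is kept:
shred -n 3 demo.txt

ls demo.txt                       # the name is still there...
grep -q 'top secret' demo.txt || echo 'contents destroyed'
# prints: demo.txt
# prints: contents destroyed
```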

You can even destroy the filesystem itself, and shred the raw disk device. This would be handy if you were decommissioning old systems, upgrading disks, or even if you were selling an old hard drive on eBay. If so, you’d use a command like this:

  # shred -u -n 20 /dev/sda

This command is so serious that you’d actually need to reformat the disk afterwards in order to render it usable again.

It should go without saying that the commands illustrated in this article are as destructive as it’s possible to be without a bucket of salt water. I accept no responsibility for accidental shredding.
