May 31 2012

SELinux works pretty well straight out of the box, but I have learnt that when application reconfigurations mysteriously result in failures to start up, SELinux is generally the culprit. In particular, if any file locations are customised from what’s installed, SELinux will lock them down, and for the uninitiated, the debugging process can be quite confusing. The Apache webserver (httpd) provides a good example of what happens in this scenario, and I’ll demonstrate how to make it work with the DocumentRoot in a different location.

SELinux can be distinctly daunting for Linux sysadmins, and often seems to be more trouble than it’s worth. However, it is an extremely powerful tool which increases security and encourages rigour in configuration. Essentially, SELinux augments the usual Unix file permissions and ownership by adding more granular, application-specific contexts. I won’t go into the details here, as it’s well documented elsewhere.

What I’ll aim to show here is two ways of relocating the Apache DocumentRoot directory – the proper way, and a quicker, dirtier way.

Say you decide to situate the Apache webserver root under /u01/www/html instead of the default /var/www/html. You edit /etc/httpd/conf/httpd.conf and change the DocumentRoot line to:

DocumentRoot /u01/www/html

Copy your HTML files into place, and change the directory and file ownerships and permissions:

# chown -R apache. /u01/www/html
# chmod 755 /u01/www/html
# chmod 644 /u01/www/html/*

(NB: That’s not a typo – you can type “apache.”, with the dot at the end, in chown rather than “apache:apache” and save yourself a few keystrokes).

Restart httpd to make sure, and browse to your web page. You’ll get something like this in your browser:

You don't have permission to access / on this server.

And in /var/log/httpd/error_log:

[client] (13)Permission denied: access to /index.html denied

This is because the Apache back-end process, running as user “apache”, can’t access the file /u01/www/html/index.html – even though we granted all the correct permissions. What gives? It turns out that with SELinux, regular old permissions and ownership aren’t enough.

Here’s how you can fix this, nice and quickly.

First things first, check that you’ve got SELinux running, and that it is in fact the source of your problem:

  # getenforce
  Enforcing

Yep. Next, check the SELinux context of the original httpd DocumentRoot directory. The ls command has a “-Z” switch which displays this:

# ls -ldZ /var/www/html
drwxr-xr-x. root root system_u:object_r:httpd_sys_content_t:s0 /var/www/html

Take note of the colon-delimited string “system_u:object_r:httpd_sys_content_t:s0”; you’ll need it later. Each field is:

  • User: system_u
  • Role: object_r
  • Type: httpd_sys_content_t
  • Level: s0
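
As an aside – a hedged sketch, assuming the libselinux-utils package is installed – the matchpathcon utility will print the context the loaded policy expects for any path, which is a quick way to check a default label without inspecting an existing file:

```shell
# Ask the SELinux policy what context it expects for a path.
# Guarded so it still runs on systems without the SELinux utilities.
if command -v matchpathcon >/dev/null 2>&1; then
    expected=$(matchpathcon /var/www/html)
else
    expected="matchpathcon not installed (libselinux-utils)"
fi
echo "$expected"
```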

Checking the SELinux context on our new, erroneously configured directory, we see this:

# ls -ldZ /u01/www
drwxr-xr-x. root root unconfined_u:object_r:default_t:s0 /u01/www

So this explains why Apache can’t read the index.html file – it’s got the wrong SELinux context. You can see how this enhances security: Apache can never serve up a page unless it has explicitly been given a context that matches its privileges.

To reset the directory and file contexts, you need to ensure that the semanage software is installed (it may not be, depending on what package groups you have):

  # yum whatprovides */semanage
  # yum -y install policycoreutils-python

A very full discussion of how to change SELinux policies and file contexts is available elsewhere, but briefly, the steps are:

  # semanage fcontext -a -t httpd_sys_content_t "/u01/www(/.*)?"
  # restorecon -Rv /u01/www
restorecon reset /u01/www context unconfined_u:object_r:var_t:s0->system_u:object_r:httpd_sys_content_t:s0
restorecon reset /u01/www/html context unconfined_u:object_r:var_t:s0->system_u:object_r:httpd_sys_content_t:s0
restorecon reset /u01/www/html/index.html context unconfined_u:object_r:var_t:s0->system_u:object_r:httpd_sys_content_t:s0

The semanage argument “/u01/www(/.*)?” is a regular expression indicating that the context should apply to the directory “/u01/www” and everything under it.

Quick and Dirty Fix
If, however, you don’t have access to the semanage program, or if you just need to make a small ad hoc change to a file context, you can simply set it manually. Remember that context string from earlier? Grab the relevant fields from it.

  # chcon -R -u system_u -t httpd_sys_content_t /u01/www/

This command uses the “-R” switch, so it applies recursively to all files under the directory. Note the difference between this command and the semanage approach: chcon changes the labels on the files themselves, but does not update the SELinux policy.
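
To make that contrast concrete, here’s a sketch of the two approaches as shell functions (the function names are my own, not standard tools; both need root on an SELinux system):

```shell
# Durable fix: record the context in the policy, then apply it.
# Survives a filesystem relabel because the policy now knows about it.
label_persistent() {
    semanage fcontext -a -t httpd_sys_content_t "$1(/.*)?"
    restorecon -Rv "$1"
}

# Quick fix: relabel the files directly. The policy still holds the
# old default, so the next relabel (or restorecon) will revert this.
label_adhoc() {
    chcon -R -u system_u -t httpd_sys_content_t "$1"
}
```

Running restorecon over a chcon-labelled tree is in fact an easy way to see the difference: it resets the labels straight back to whatever the policy says they should be.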

So the proof of the pudding will be in the reloading. Ctrl-R your browser and BOOM! Your page is back and being happily served up.

Matt Parsons is a freelance Linux specialist who has designed, built and supported Unix and Linux systems in the finance, telecommunications and media industries.

He lives and works in London.

May 08 2012

The primary use of iptables is as a host-based software firewall. It intercepts network traffic and applies filters to it, deciding what to permit and what to deny. Iptables is very flexible, but in its most common configuration it simply filters incoming connections based on source, destination, protocol and port. It can also log traffic, filter outgoing connections, masquerade IP addresses for NAT and limit traffic rates. It is indispensable for maintaining the security of a server, but its functions can be put to other uses too – such as simulating application failures.

It’s a relatively common testing scenario – you want to find out what happens when a service or resource becomes unavailable, and want to see how other applications handle this. For example, if your MySQL database goes down, what will Tomcat do? You could simulate this by shutting down the database, but if there are other nodes in the cluster, or other users accessing the database, this may not be possible.

A nice simple solution is to make an ad hoc addition to the iptables rules on the server that notices the failure. In the Tomcat/MySQL example, this means a change on the application host running Tomcat. Since it’s attempting a connection to the MySQL database on port 3306, all you need to do is configure iptables to block outgoing traffic with a destination port of 3306. Like this:

   # iptables -A OUTPUT -p tcp --dport 3306 -j DROP

Check that the iptables ruleset has been updated with:

   # iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
RH-Firewall-1-INPUT  all  --             

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
RH-Firewall-1-INPUT  all  --             

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
DROP       tcp  --             tcp dpt:3306    <<====== This line here has been added

Chain RH-Firewall-1-INPUT (2 references)
target     prot opt source               destination         
ACCEPT     all  --             
ACCEPT     icmp --             icmp type 255 
ACCEPT     all  --             state RELATED,ESTABLISHED 
ACCEPT     tcp  --             state NEW tcp dpt:22 /* SSH */ 
ACCEPT     tcp  --             state NEW tcp dpt:80 /* http */ 
ACCEPT     tcp  --             state NEW tcp dpt:8080 /* tomcat */ 
REJECT     all  --             reject-with icmp-host-prohibited 

This will have the effect of intercepting all outbound traffic heading to port 3306 on any destination host, and dropping it outright. The application will be unable to reach the database, although all other hosts will have no interruption of service.

Once your testing is complete, you can undo the previous command by simply replacing the "-A" option ("append") with "-D" ("delete"):

   # iptables -D OUTPUT -p tcp --dport 3306 -j DROP
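
Since the append and delete commands are symmetric, the whole simulated outage can be wrapped in a pair of throwaway shell functions (the names are my own; run as root):

```shell
# Simulate an outage of a remote TCP service by dropping outbound
# packets to its port, then lift the block again afterwards.
fail_port() {
    iptables -A OUTPUT -p tcp --dport "$1" -j DROP
}
restore_port() {
    iptables -D OUTPUT -p tcp --dport "$1" -j DROP
}
# Usage: fail_port 3306; exercise the application; restore_port 3306
```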

There are many other ways of using this technique. You could block a host’s ability to use DNS (UDP port 53):

   # iptables -A OUTPUT -p udp --dport 53 -j DROP

Prevent a host from accessing Internet websites (HTTP on port 80, HTTPS on port 443):

   # iptables -A OUTPUT -p tcp --dport 80 -j DROP
   # iptables -A OUTPUT -p tcp --dport 443 -j DROP

Or block a host from reaching anything that isn't a private local address, by using the negation operator ("!"):

   # iptables -A OUTPUT -d ! -j DROP

Which will look like this:

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
DROP       all  --           !   

This method could be used for testing monitoring, forcing an automated cluster failover, or eliminating extra traffic when tracing a bug.

The iptables command can be used as a powerful tool for exercising very fine-grained control over your host networking. A site-wide network catastrophe can effectively be locally simulated with one simple trouble-free command. Write it down.


May 06 2012

Data security is a necessary part of Linux systems administration. The primary focus tends to be on guarding against external penetration, then on securing systems from internal breach by nefarious workers. One facet which is frequently neglected is how to safeguard residual data after systems are decommissioned. The problem is, the rm command just isn’t sufficient. If you want your data destroyed beyond recovery, then shred is the tool to use.

Shredded Documents of US Embassy

Hard disks are essentially just spinning platters of magnetised rusty iron. Data is represented by a vast series of minuscule charged patterns. There’s probably more to it than this. I’ll leave the physics as a research exercise for the reader.

On top of the raw hard disk, conceptually, sits the filesystem. In general, a file’s data is stored in blocks, which are tracked by a structure known as an inode; a directory is essentially an index mapping filenames to inode numbers. So when a file is deleted, its entry is simply removed from the directory index, but the file data itself still exists untouched – orphaned, as it were – and will continue to exist until the space is reused and overwritten.
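
You can watch this name-to-inode mapping directly with a hard link – a second directory entry pointing at the same inode. A safe sketch using a throwaway directory:

```shell
# A directory entry is just a name -> inode mapping; a hard link is
# simply a second name for the same inode.
tmpdir=$(mktemp -d)
echo "hello" > "$tmpdir/original"
ln "$tmpdir/original" "$tmpdir/alias"       # second name, same inode
ls -i "$tmpdir"                             # both names show one inode number
links=$(stat -c '%h' "$tmpdir/original")    # link count is now 2
rm "$tmpdir/alias"                          # removes one name only...
content=$(cat "$tmpdir/original")           # ...the data is still reachable
rm -rf "$tmpdir"
```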

The problem here, of course, is that rm’ing a file doesn’t actually delete it. The data contents still exist. The only way to indelibly delete a file is to overwrite it. Even then, it’s possible with specialised forensic software to recover overwritten data. So to securely destroy a file, it must be overwritten again and again, preferably with obnoxious junk data. The little-used shred utility does just this.

Shred Example

The command itself is simple and straightforward. To destroy a file, specify the number of wipes to perform, and the name of the file. You can also view the progress of the command with the “verbose” switch:

  # shred -u -n 30 -v customer_data.txt

If the “-u” option is omitted, the file will remain, but will be overwritten with gibberish. If the “-n” option is dropped, then the default in current GNU coreutils is to make 3 passes.
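
To try the flags out safely, run shred on a throwaway file rather than anything you care about:

```shell
# Safe demonstration: shred a temporary file, then confirm it is gone.
tmpfile=$(mktemp)
echo "pretend this is sensitive" > "$tmpfile"
shred -u -n 3 -v "$tmpfile"     # overwrite 3 times, then unlink
if [ ! -e "$tmpfile" ]; then
    result="destroyed"
fi
echo "$result"
```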

You can even destroy the filesystem itself, and shred the raw disk device. This would be handy if you were decommissioning old systems, upgrading disks, or even if you were selling an old hard drive on eBay. If so, you’d use a command like this:

  # shred -n 20 /dev/sda

This command is so serious that you’d actually need to reformat the disk afterwards in order to render it usable again.

It should go without saying that the commands illustrated in this article are as destructive as it’s possible to be without a bucket of salt water. I accept no responsibility for accidental shredding.
