
May 06 2012
 
Data security is a necessary part of Linux systems administration. The primary focus tends to be on guarding against external penetration, then on securing systems from internal breach by nefarious workers. One facet which is frequently neglected is how to safeguard residual data after systems are decommissioned. The problem is, the rm command just isn’t sufficient. If you want your data destroyed beyond recovery, then shred is the tool to use.

[Image: shredded documents of a US embassy]

Hard disks are essentially just spinning platters of magnetised rusty iron. Data is represented by a vast series of minuscule charged patterns. There’s probably more to it than this. I’ll leave the physics as a research exercise for the reader.

On top of the raw hard disk drive, conceptually, is the filesystem. In general, file data is stored in blocks on disk, and each file is described by a structure known as an inode, which records where those blocks live. A directory is essentially a list of filenames mapped to inode numbers. So when a file is deleted, its entry is simply removed from the directory and the inode is released, but the data blocks themselves are left untouched, orphaned as it were, and will continue to exist until the space is reused and overwritten.
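
You can see both halves of this arrangement from the shell: the inode number a directory entry points at, and the metadata (including the link count) held in the inode itself. The filename here is just an example:

  # ls -i customer_data.txt
  # stat customer_data.txt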

The problem here, of course, is that rm’ing a file doesn’t actually delete it. The data contents still exist. The only way to indelibly delete a file is to overwrite it. Even then, it’s possible with specialised forensic software to recover overwritten data. So to securely destroy a file, it must be overwritten again and again, preferably with obnoxious junk data. The little-used shred utility does just this.
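
If you want to convince yourself of this, here’s a quick demonstration using a throwaway loopback image. All the paths, sizes and the marker string are arbitrary examples:

  # dd if=/dev/zero of=/tmp/demo.img bs=1M count=16
  # mkfs.ext2 -F /tmp/demo.img
  # mkdir -p /mnt/demo
  # mount -o loop /tmp/demo.img /mnt/demo
  # echo "TOP_SECRET_MARKER" > /mnt/demo/secret.txt
  # sync
  # rm /mnt/demo/secret.txt
  # umount /mnt/demo
  # grep -a TOP_SECRET_MARKER /tmp/demo.img

That final grep still finds the marker string sitting in the raw image, even though the file is long gone from the directory listing.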

Shred Example

The command itself is simple and straightforward. To destroy a file, specify the number of wipes to perform, and the name of the file. You can also view the progress of the command with the “verbose” switch:

  # shred -u -n 30 -v customer_data.txt

If the “-u” option is omitted, the file will remain, but will be overwritten with gibberish. If the “-n” option is dropped, shred falls back to its default number of passes: three in current versions of GNU coreutils (older releases defaulted to 25).
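
So, for example, to scramble a file in place with the default number of passes, leaving the gibberish-filled file behind:

  # shred -v customer_data.txt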

You can even destroy the filesystem itself, and shred the raw disk device. This would be handy if you were decommissioning old systems, upgrading disks, or even if you were selling an old hard drive on eBay. If so, you’d use a command like this:

  # shred -n 20 -v /dev/sda

This command is so serious that you’d actually need to repartition and reformat the disk afterwards in order to render it usable again. (Note there’s no “-u” this time; unlinking a device node doesn’t achieve anything useful.)
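
Once the dust settles, bringing the disk back into service means laying down a new partition table and filesystem. Something along these lines would do it; the msdos label and single ext4 partition are just an example layout:

  # parted -s /dev/sda mklabel msdos
  # parted -s /dev/sda mkpart primary ext4 1MiB 100%
  # mkfs.ext4 /dev/sda1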

It should go without saying that the commands illustrated in this article are as destructive as it’s possible to be without a bucket of salt water. I accept no responsibility for accidental shredding.


Matt Parsons is a freelance Linux specialist who has designed, built and supported Unix and Linux systems in the finance, telecommunications and media industries.

He lives and works in London.

May 01 2012
 

Laziness is one of the hallmarks of a good sysadmin. Larry Wall, the father of Perl, says so. An economy of keystrokes is the mark of a keen mind that needs to get as much done as possible.

From time to time, one needs to change a single piece of text in a lot of files, and no one wants to manually open, edit and save each file individually with vi at the command line. An example of this would be the cloning of a set of application configuration files from one host to another. But repeatedly changing a host-specific string is mind-numbing.

There are several ways to make repeated updates easily and safely, and they’re worth committing to memory in order to add them to your sysadmin toolbox. The best way, I believe, is with Perl.

  # perl -i.bak -pe 's/this/that/g' filename

This will edit the file “filename” and replace all occurrences of the text “this” with the text “that”. Furthermore, it will back up “filename” by copying the original and appending the extension “.bak”, leaving the untouched contents in “filename.bak”.

The backup extension works even if you pass the “*” glob instead of a single filename, in which case it will search, replace and back up every file in the directory. It does make a backup copy of every file it processes, whether it actually changes anything or not, so this will double the number of files in the directory.
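
For instance, to run the same substitution across everything in the current directory (the pattern and replacement are just placeholders):

  # perl -i.bak -pe 's/this/that/g' *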

So if you wanted to economically perform a find-and-replace recursively on every file under a directory, and all its subdirectories, something like this would suffice (use it with caution):

  # cd DIR
  # perl -i.bak -pe 's/foo/bar/' `grep -lr foo *`

The grep returns a list of the files (with their paths) which contain the pattern “foo”, so the one-liner limits itself to matching files only, rather than unnecessarily littering your filesystem with extraneous backup copies.
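
One caveat: backticks and a bare glob will trip over filenames containing spaces. A variant of the same idea (not the one-liner above) copes with awkward names by using NUL-separated output:

  # grep -lrZ foo . | xargs -0 perl -i.bak -pe 's/foo/bar/'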

Commit this one to your mental toolbox. You’ll always have something better to do than editing swags of files manually.



Mar 19 2012
 

I’m really lazy, a quality that I firmly believe to be the hallmark of a good sysadmin. This is borne out by Larry Wall (the creator of Perl) and his “Virtues of a Programmer”.

That’s what I told them at my last exit interview, anyway.

So as part of this, you’ve got to learn how to crunch the bash command line as quickly as possible. It allows you to work faster, type less, and get back to doing something else. These are, I believe, the shortcuts I use the most, and which have become second nature.

Search history backwards    Ctrl-R
Line kill                   Ctrl-U
Word kill                   Ctrl-W
Clear screen                Ctrl-L
Clear to end of line        Ctrl-K
Clear to end of word        Esc-D
Yank back kill              Ctrl-Y
Forward word                Esc-F
Back word                   Esc-B

Use these and intimidate lesser sysadmins with your bash power.
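
And if you ever forget what’s bound where, bash will happily tell you: the readline library can dump its entire current keymap.

  # bind -p | grep -v 'not bound'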



Feb 28 2012
 

Sometimes, for no discernible reason, a command fails. In the absence of a log file, a decent debug level, or a helpful blog post discovered via Google, you’re left floundering, and could in theory be stuck on a simple blocker all day. Often the culprit is a cryptic error message like “File not found”, and you smash your keyboard impotently, wondering which file exactly the process is looking for. In these cases, a debugger may often be helpful.

But for the non-kernel-hacking, fashionably dressed Unix user, a dive into the stack trace and a peek behind the curtain of system calls just increases the confusion, and so this line of investigation is often neglected. Yet the vast majority of these problems can be solved with just a couple of options to the “strace” command.

Going back to that “File not found” error: if you knew which file the process expected, you’d be fine. Or suppose you wanted to know which environment scripts a command was implicitly loading, and in what order. The “strace” command can be filtered to show only the file-open system calls, like so:

# strace -f -e open -o /tmp/strace.log <command>

What you’ll get in the /tmp/strace.log file is a list of every attempt by the process, successful or otherwise, at opening files. This can be helpful in locating a log file being written to, or a library file being read from.
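
Failed opens are reported with an ENOENT error code, so a quick grep over the log usually surfaces the file the process couldn’t find:

# grep ENOENT /tmp/strace.log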

For even more information, try “-e open,close,read,write”.  But “open” tends to be just the ticket for the majority of these kinds of problems.
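
Spelled out in full, that broader trace looks like this (expect the log to get noisy):

# strace -f -e trace=open,close,read,write -o /tmp/strace.log <command>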

