Nov 01, 2012

I’m going to begin by saying you should never ever do what I’m about to describe, as it’s bad practice, sloppy and dangerous. But then, so is working late on a Friday night, so you might as well choose the lesser of two evils.

As a sysadmin my overriding concern is to do as little work as possible, as quickly as possible. If that’s at all possible. And by work, I mean those repetitive and tedious tasks. There is probably nothing worse than performing the same command, or series of commands, over and over again on multiple hosts. It’s the adult equivalent of writing out lines on a blackboard, and unfortunately this seems to crop up more often than it should.
[Image: Interior of Brisbane Technical College signwriting class, ca. 1900 – State Library of Queensland]

The best practice way of addressing multiple updates is by implementing some form of configuration management system. This kind of software allows the administrator to centrally manage the files, packages and patches of all their hosts from a single central point. Puppet, Chef, Spacewalk and Satellite are all excellent products that approach this problem from different angles, with each enjoying differing degrees of favour among different factions of the Linux community.

But there are times when these tools aren’t in place or aren’t complete, or when you’re just tired and desperate and a quicker, dirtier solution is what’s required. I’m going to describe the use of such a tool: one that will allow you to execute commands simultaneously in shells on multiple hosts. Its potential for wholesale destruction is enormous, and I can’t recommend you ever use it, but once it’s in your toolbox, you will.

So what if, rather than invoking commands from an SSH shell on hosts one after the other – that is, in serial – you could type the command once and have it execute simultaneously on all hosts – that is, in parallel? There are two tools I know of that perform this task. One is called ClusterSSH, and the other is MultiSSH (or mssh). ClusterSSH, or cssh, is available from the EPEL repository for Fedora/CentOS/Red Hat, and mssh can be obtained from the Ubuntu Software Centre. Both can also be downloaded from SourceForge. For the purposes of this post I’ll be discussing mssh, but everything applies equally to ClusterSSH; their command-line arguments and behaviour are identical.

Install the software in whatever form your distribution expects, or however you’re comfortable. Then invoke mssh, passing it a list of hosts to connect to:

   # mssh perfdb01 perfdb02 perfdb03 perfdb04

Boom!

And “Boom!”, you’ll have a window subdivided into four separate shells – one for each host – all operating in unison. Type a command into the top strip and it appears at each command line. Click on an individual pane and you’ll be typing in just that one shell. You can also click on the Servers menu and deselect hosts to disable input to them.

The first thing you’ll get is a login prompt, if you haven’t distributed your public SSH keys; if your password is the same on each host, or you’re using LDAP, logging in simultaneously is a breeze.

Now, you can sudo to whatever account you need, and hack away to your heart’s content. But remember, “measure twice, cut once”. It’s easy enough to think all your shells have the same current working directory at any particular time, but they may not. The possibilities for total system destruction are enormous, so for Pete’s sake, be careful. I’ll leave example use as an exercise for the reader.
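One cheap sanity check before typing anything destructive into the shared input strip is to ask every pane where it thinks it is:

   # hostname; pwd

If any pane reports a host or working directory you weren’t expecting, deselect it from the Servers menu before going any further.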

Cautionary tricks when using MultiSSH

Here are a few tricks I’ve picked up that can increase the power of MultiSSH.

Staggered execution timing

One of the difficulties of simultaneous execution is that if the command you’ve invoked is hitting a common resource, say a local YUM repository, some of the invocations may fail under the sudden I/O or CPU load. So I like to add a bit of a random pause before my command:

# sleep $(( $RANDOM/1000 )); yum -y update

The $RANDOM variable returns an integer between 0 and 32767, and sleep takes seconds as an argument, so dividing by 1000 gives each host a random pause of up to about 32 seconds – enough to stagger the load without leaving me waiting for hours.
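If you’d rather be explicit about the upper bound, the modulo operator does the same job; the 30 below is just an arbitrary ceiling in seconds:

# sleep $(( RANDOM % 30 )); yum -y update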

Prevention of wholesale deletion

There’s a cute way to stop the famous "rm -rf *". You could probably do it with SELinux, and probably should read the man pages to work out how, but if you’ve turned SELinux off in mystified frustration, this is a quick hack to counter it. All you need to do is create an undeletable file named "-i". Any expansion of * in "rm -rf *" will then include "-i" in the argument list, and rm will interpret it as its interactive flag, forcing the command to prompt before every deletion and putting a halt to things. How do you create a file that’s undeletable even by root? If you’re using an “ext” filesystem, you can use the "chattr" command, like this:

   # cd /
   # touch ./-i
   # chattr +i ./-i

This will create a file called "-i" that the filesystem attribute itself (rather than ordinary file permissions) will prevent you from deleting:

  # rm -f ./-i
rm: cannot remove `-i': Operation not permitted

Read the chattr man page to find out how to undo the attribute change and delete the file.
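For the impatient, the undo is a one-liner (this -i is chattr’s immutable attribute, nothing to do with rm’s interactive flag):

   # chattr -i ./-i
   # rm -f ./-i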

So that’s a quick rundown of MultiSSH. With the time you’ll gain by using this tool, do some research on Puppet or Chef and implement one of them to manage all your files, packages and configurations across your entire estate. That’s the better practice. But in the meantime, this quick and dirty tool could really help you out.



Jun 11, 2012

As a sysadmin, I like to get things done as quickly as possible – that way I can start doing more things. I hate waiting for something to occur, particularly when this means having to constantly context-switch my attention back to the host, checking and rechecking a file or command.

An example of this is the time I was waiting for a DNS change to propagate down to a local server. I got to thinking: why should I constantly type “dig” every ten minutes? This computer should do the work for me.

Being inclined to do as little as possible, I wanted an easy single-line shell command – not an entire script. Here’s what I came up with: a piece of bash that would regularly run a command until a given outcome, then exit and email me.

In this example, I wanted to be emailed when the IP address of www.example.com became “10.10.10.10” in DNS (obviously these examples have been sanitised of all real world corporate information).

Here it is in one line (although separated with line breaks to show the logical components). Since nohup can only run a command, not a shell keyword like until, the loop is wrapped in bash -c; the whole thing is then nohup’ed for resilience.

# nohup bash -c 'until \
  dig www.example.com | grep -A1 "ANSWER SECTION" | tail -1 | grep 10.10.10.10; \
  do sleep 300; done | mail -s "IP Changed" matt@email-address.com' &

So this will simply query the IP information with dig, find the record in the ANSWER SECTION and compare it to the expected new IP address. If the new address is returned, the until loop exits and the matching line is mailed to me. If the pattern isn’t returned, the loop sleeps for five minutes (300 seconds) before running again.
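If you log back in later and want to check that the watcher is still alive, grepping the process table for something distinctive in the command line will do; the pattern below is just whatever stands out in your own loop:

   # pgrep -fl 'sleep 300'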

I setted and forgetted this script and went about my other tasks. About 12 hours later I got an email saying that the IP address had changed. I didn’t have to make any periodic checks.

This sort of thing is a great technique to get your head around. Tricks like this can hugely increase your productivity by taking the drudgery out of your hands and letting you get on with the more important, thinking-heavy jobs.

Any improvements to the command, particularly ways of making it shorter? If so, I’d love to hear about them in the comments.



May 17, 2012

I’m sure you’ve done this before. You kick off some process in the background that you know will take a while, but then you realise that you need to logout, and that if you do, then the process will die when the shell exits. What to do? Surely you’ll not have to start again?

So you’ve done this:

   # rsync -a host1:/u01/data/ host2:/u02 &

What’s that? You forgot to background it with the “&”? Just press Ctrl-Z to suspend it, then type bg – now it’s running in the background.
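The exchange looks roughly like this (the job number and spacing will vary):

  # rsync -a host1:/u01/data/ host2:/u02
  ^Z
[1]+  Stopped                 rsync -a host1:/u01/data/ host2:/u02
  # bg
[1]+ rsync -a host1:/u01/data/ host2:/u02 &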

So you’ve realised that you’ve kicked off a process without nohup and that you’ll need to logout. But you don’t want to lose the work the process has done, and don’t want to start it again. You wish you’d thought to run it with nohup in the first place, like this:

  # nohup rsync -a host1:/u01/data/ host2:/u02 &
[2] 4885
nohup: ignoring input and appending output to `nohup.out'

Is it too late? Do you really need to kill the backgrounded process and start it again from the beginning when it’s already come so far?

Not on your life, because there is a little bash builtin tucked away at the back of the man page that’s easy to miss – disown.

All you need to do is execute disown with the job number as an argument. This job control procedure goes something like this (the number “[1]” is the job number):

  # jobs
[1]+ Running       rsync -a host1:/u01/data/ host2:/u02 &
  # disown %1

This will detach the process with job number 1 – the rsync – from the running shell, so it won’t be sent a hangup signal and killed when the shell exits. The rsync will continue to run happily on its own while you’re off in the pub. Problem solved.
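As a footnote, disown also takes a -h flag, which leaves the job in the jobs table but marks it so that it isn’t sent SIGHUP when the shell exits – handy if you’d still like to keep an eye on it with jobs before you leave:

  # disown -h %1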

You’re welcome.



May 06, 2012

Data security is a necessary part of Linux systems administration. The primary focus tends to be on guarding against external penetration, then on securing systems from internal breach by nefarious workers. One facet that is frequently neglected is how to safeguard residual data after systems are decommissioned. The problem is, the rm command just isn’t sufficient. If you want your data destroyed beyond recovery, then shred is the tool to use.

[Image: Shredded documents of the US Embassy]

Hard disks are essentially just spinning platters of magnetised rusty iron. Data is represented by a vast series of minuscule charged patterns. There’s probably more to it than this. I’ll leave the physics as a research exercise for the reader.

On top of the raw hard disk drive, conceptually, is the filesystem. In general, file data is stored in data blocks that are tracked by an index structure known as an inode, and the directory simply maps the filename to an inode number. So when a file is deleted, that entry is removed from the directory index and the inode and blocks are marked as free, but the file data itself still exists on disk untouched, orphaned as it were, and will continue to exist until the space is reused and overwritten later.

The problem here, of course, is that rm’ing a file doesn’t actually delete it. The data contents still exist. The only way to indelibly delete a file is to overwrite it. Even then, it’s possible with specialised forensic software to recover overwritten data. So to securely destroy a file, it must be overwritten again and again, preferably with obnoxious junk data. The little-used shred utility does just this.
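If you want to convince yourself, a crude (and slow) demonstration is to write a distinctive string to a file, rm it, and then search the raw block device for it. The device name below is just an assumed example – substitute whichever one holds the filesystem in question, and bear in mind the blocks may already have been reused:

  # echo "FINDME-1234567890" > /var/tmp/secret.txt
  # sync
  # rm /var/tmp/secret.txt
  # grep -a "FINDME-1234567890" /dev/sda2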

Shred Example

The command itself is simple and straightforward. To destroy a file, specify the number of wipes to perform, and the name of the file. You can also view the progress of the command with the “verbose” switch:

  # shred -u -n 30 -v customer_data.txt

If the “-u” option is omitted, the file will remain, but its contents will be overwritten with gibberish. If the “-n” option is dropped, shred falls back to its default number of passes (25 on older versions of coreutils; newer releases default to just three).
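There is also a “-z” switch, which adds a final pass of zeros on top of the random passes so that the file doesn’t obviously look shredded:

  # shred -u -n 30 -z -v customer_data.txt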

You can even destroy the filesystem itself, and shred the raw disk device. This would be handy if you were decommissioning old systems, upgrading disks, or even if you were selling an old hard drive on eBay. If so, you’d use a command like this:

  # shred -u -n 20 /dev/sda

This command is so thorough that you’d actually need to repartition and reformat the disk afterwards in order to render it usable again.
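Bringing the disk back into service means recreating a partition table and filesystem from scratch; a rough sketch, assuming you just want a single ext4 partition:

  # parted -s /dev/sda mklabel msdos mkpart primary ext4 1MiB 100%
  # mkfs.ext4 /dev/sda1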

It should go without saying that the commands illustrated in this article are as destructive as it’s possible to be without a bucket of salt water. I accept no responsibility for accidental shredding.


Matt Parsons is a freelance Linux specialist who has designed, built and supported Unix and Linux systems in the finance, telecommunications and media industries.

He lives and works in London.