Nov 01 2012
 

I’m going to begin by saying you should never ever do what I’m about to describe, as it’s bad practice, sloppy and dangerous. But then, so is working late on a Friday night, so you might as well choose the lesser of two evils.

As a sysadmin my overriding concern is to do as little work as possible, as quickly as possible. If that’s at all possible. And by work, I mean those repetitive and tedious tasks. There is probably nothing worse than performing the same command, or series of commands, over and over again on multiple hosts. It’s the adult equivalent of writing out lines on a blackboard, and unfortunately this seems to crop up more often than it should.
[Image: Interior of Brisbane Technical College signwriting class, ca. 1900 (State Library of Queensland)]

The best-practice way of addressing multiple updates is to implement some form of configuration management system. This kind of software allows the administrator to centrally manage the files, packages and patches of all their hosts from a single point. Puppet, Chef, Spacewalk and Satellite are all excellent products that approach this problem from different angles, each enjoying differing degrees of favour among different factions of the Linux community.

But there are times when these tools aren't available, or you're just tired and desperate, and a quicker, dirtier solution is what's required. I'm going to describe the use of such a tool. One that will allow you to execute commands simultaneously in shells on multiple hosts. Its potential for wholesale destruction is enormous, and I can't recommend you ever use it, but once it's in your toolbox, you will.

So what if, rather than invoking commands from an SSH shell on hosts one after the other – that is, in serial – you could type the command once and have it execute simultaneously on all hosts – that is, in parallel? There are two tools that I know of that perform this task. One is called ClusterSSH, and the other is MultiSSH (or mssh). ClusterSSH, or cssh, is available from the EPEL repository for Fedora/CentOS/Red Hat, and mssh can be obtained from the Ubuntu Software Centre. Both can also be downloaded from SourceForge. For the purposes of this post I'll be discussing mssh, but everything applies equally to ClusterSSH: their command-line arguments and behaviour are identical.
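For reference, installation is usually a one-liner. The package names below are what I'd expect on each platform, but check your repositories if they differ:

   # yum install clusterssh      (Fedora/CentOS/Red Hat, with EPEL enabled)
   # apt-get install mssh        (Debian/Ubuntu)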

Install the software in whatever form your distribution expects, or however you're comfortable. Then invoke mssh by passing it a list of hosts to connect to:

   # mssh perfdb01 perfdb02 perfdb03 perfdb04


Boom! You'll have a window subdivided into four separate shells – one for each host – all operating in unison. Type a command in the top strip and it appears at each command line. Click on an individual pane and you'll be typing in just that one shell. You can also click on the Servers menu and unselect hosts to disable input to them.
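One thing worth knowing: if you need to log in as someone other than your current user, cssh accepts the usual -l flag (check the man page for your version of mssh, or just use the standard user@host form, which gets handed straight to ssh):

   # cssh -l root perfdb01 perfdb02 perfdb03 perfdb04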

The first thing you'll get is a login prompt, unless you've already distributed your public SSH keys. If your password is the same on each host, or you're using LDAP, logging in simultaneously is a breeze.
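If you'd rather sort your keys out first, a quick loop with ssh-copy-id (assuming your public key lives at the default ~/.ssh/id_rsa.pub) means you only type each password once, before mssh is even involved:

   $ for h in perfdb01 perfdb02 perfdb03 perfdb04; do ssh-copy-id -i ~/.ssh/id_rsa.pub "$h"; done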

Now, you can sudo to whatever account you need, and hack away to your heart’s content. But remember, “measure twice, cut once”. It’s easy enough to think all your shells have the same current working directory at any particular time, but they may not. The possibilities for total system destruction are enormous, so for Pete’s sake, be careful. I’ll leave example use as an exercise for the reader.
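A good habit is to make your first broadcast a harmless one. Something like the line below, typed into the top strip, lets you eyeball every pane and confirm each shell is on the host, and in the directory, you think it is:

   # hostname; pwd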

Cautionary tricks when using MultiSSH

Here are a few tricks I've picked up that can increase the power of MultiSSH.

Staggered execution timing

One of the difficulties of simultaneous execution is that if the command you've invoked hits a common resource, say a local YUM repository, some of the invocations can fail or crawl as that shared host is hammered by IO and CPU load all at once. So I like to add a bit of a random pause before my command:

   # sleep $(( $RANDOM/1000 )); yum -y update

The $RANDOM variable returns an integer between 0 and 32767, while sleep takes its argument in seconds, so dividing by 1000 gives each host a pause of somewhere between 0 and 32 seconds – enough to stagger the load without waiting around for hours.
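If you'd prefer a hard upper bound on the stagger, the modulo operator does the same job; here each host waits somewhere between 0 and 29 seconds before starting:

   # sleep $(( RANDOM % 30 )); yum -y update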

Prevention of wholesale deletion

There's a cute way to stop the famous "rm -rf *". You can probably do it with SELinux, and probably should read the man page to work out how, but if you've turned SELinux off in mystified frustration, this is a quick hack to counter it. All you need to do is create a file named "-i" that is undeletable. When the shell expands * in "rm -rf *", the "-i" filename lands among the arguments and rm parses it as its interactive flag, forcing the command to prompt for confirmation and putting a halt to things. How do you create a file that's undeletable even by root? If you're using an "ext" filesystem, you can use the "chattr" command, like this:

   # cd /
   # touch ./-i
   # chattr +i ./-i

This will create a file called "-i" that the filesystem's immutable attribute (rather than ordinary file permissions) will prevent you from deleting:

   # rm -f ./-i
   rm: cannot remove `-i': Operation not permitted

Read the chattr man page for the full story on file attributes; the quick way to undo the change is shown below.
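Assuming nothing else has touched the file, the undo is just the mirror image of the setup: chattr -i clears the immutable attribute, after which the file deletes normally:

   # chattr -i ./-i
   # rm -f ./-i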

So that's a quick rundown of MultiSSH. With the time you'll gain by using this tool, do some research on Puppet or Chef and implement one of those to manage all your files, packages and configurations across your entire estate. That's the better practice. But in the meantime, this quick and dirty tool could really help you out.


Matt Parsons is a freelance Linux specialist who has designed, built and supported Unix and Linux systems in the finance, telecommunications and media industries.

He lives and works in London.

  One Response to “Being everywhere at once: the MultiSSH tool”

  1. Hi Guys,

     also try "terminator" (available with yum). Works really great and has a broadcast feature, letting you send commands to multiple sessions at the same time.
