Not Really a Blog

November 25, 2009

Tired of remembering complicated passwords?

Filed under: Computers — jesus @ 16:41

My colleague Mat showed me a nice little trick today. Say you are tired of using a different password on every website, or maybe you don't want to reuse the same password across sites. It's simple and easy if you use a Linux computer: you only have to use, and more importantly remember, simple words or combinations, plus a little command in the shell to generate a really hard-to-guess password.

Say you want to access your Google account and you want to use the word google as the base. You can then run it through the SHA-1 hash algorithm to generate an interesting password like:


% echo "google" | sha1sum
d662cc72cfed7244a88a7360add85d5627b9cd6c  -

Or just use your hash algorithm of choice. And if the result is too long for the website you want to use it on, take the first 10 characters or so:


% echo "google" | sha1sum | cut -c1-10
d662cc72cf

And that's it: copy and paste and you are done. Yes, it's a bit of a faff, but, well, you want something secure, don't you?
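If you do this often, you can cut the faff down with a tiny shell function (my own sketch; the name mkpass is made up):

mkpass() {
    # hash the memorable word and keep the first 10 characters
    echo "$1" | sha1sum | cut -c1-10
}

After adding it to your ~/.bashrc:

% mkpass google
d662cc72cf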

Simply brilliant. Thanks Mat.

February 10, 2008

Tricks to diagnose processes blocked on heavy I/O in Linux

Filed under: Linux, System Administration — jesus @ 23:59

There's one aspect in which the Linux kernel and the GNU operating system and related tools might be lagging behind, especially with the 2.4 kernel series. I'm talking about I/O accounting, or how to know what's going on with the hard disk or other devices used to read and write data.

The thing is that Linux provides you with a few tools with which you can tell what's going on with the box and its set of disks: vmstat gives you a lot of information, as do various files scattered around the /proc filesystem. But that information only describes the system globally, so it's good for diagnosing whether a high load on a box is due to some process chewing CPU cycles away or to the hard disk being hammered and becoming painfully slow. But what if you want to know exactly what is going on, and which process or processes are responsible for the situation? The answer is that Linux doesn't provide you with tools for that, as far as I know (if you know of any, please leave a comment). There's no such thing as a top utility for per-process I/O accounting. The situation is better in Linux 2.6, provided you activate the taskstats accounting module, with which you can query information about processes. The user-space utilities are somewhat scarce, but at least there's something to start playing with.

However, there are some tricks you can use to try to find out which process is the culprit when things go wrong. As usual, many of these tricks come from work, where I keep learning from my colleagues (who, by the way, are much more intelligent than I am ;-)) whenever some problem arises that needs immediate action.

So, let's define the typical scenario in which we could apply these tricks. You've got a Linux box with a high load average: say 15, 20, etc. As you may know, the load average measures the number of processes waiting in the run queue to be executed. That doesn't necessarily mean the CPU is loaded: when processes are blocked on I/O, say a read from a slow disk, the CPU may just sit there, mostly idle. The number only makes sense when you know how many CPUs the box has: if you have a load average of 2 on a two-CPU box, then ideally you are just fine.
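You can read the current values straight from /proc/loadavg. The numbers below are made up for illustration; the three figures are the 1, 5 and 15 minute averages, the fourth field is running/total processes, and the fifth is the last PID used:

% cat /proc/loadavg
15.03 14.87 12.20 2/143 28670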

The number one tool for identifying what's going on is vmstat, which tells you a lot about what is happening in the box, especially when you execute it periodically as vmstat 1. If you read the man page (and I do recommend you read it), you can get an idea of all the information that will be scrolling across the screen :-). Almost all of its output is useful for diagnosis, except the last column in the case of a Linux 2.4 box (there, that value is added to the idle column).
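To give an idea of the shape of the output on a 2.6 box, here is a made-up sample; note the processes blocked under the b column and the high wa (I/O wait) figure, both typical of an I/O-bound machine:

% vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  3  10344  12620  50720 494696    0    0  1204   856  412  634  5  3 20 72
 1  4  10344  11980  50744 495200    0    0  1540   920  430  701  4  2 18 76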

With this tool we can find out whether the system is busy doing I/O, and how, by looking at the bi (blocks in) and bo (blocks out) columns. Swapping, when it's happening, can also mean that the hard disk is being hammered, though it also means there isn't enough memory in the system for all the processes running at that very moment.

OK, so back to our problem: how do we start? Well, the first thing to do is try to find out what is running that could be causing this. Who are the usual suspects? By looking at ps output we can get an idea of which processes and/or applications could be causing the disk I/O. The problem is that sometimes an application runs tens or hundreds of processes, each serving a remote client (say Apache prefork), and maybe only some of them are causing the havoc, so killing every possible process is not an option in a production environment (killing just the processes causing the problem may be feasible, because they might be wedged or something).

Finding the suspects

One way to find which processes are causing the writes and reads is to have a look at the processes in uninterruptible sleep state. As the box has a high load average because of I/O, surely there must be processes in that state, waiting for the disk to hand back data so their system calls can return, and those processes are likely to be involved in the high load of the system. If you think that processes in uninterruptible sleep cannot be killed, you are right, but we are assuming they enter that state briefly, again and again, because they are reading from and writing to the disk non-stop. If you have read the vmstat man page, you will have noticed that the b column tells us the number of processes in that state.

golan@kore:~$ ps aux | grep " D"
root     27351  2.9  0.2 11992 9160 ?        DN   23:06   0:08 /usr/bin/python /usr/bin/rdiff-backup -v 5 --restrict-read-only /disk --server
mail     28652  0.5  0.0  4948 1840 ?        D    23:11   0:00 exim -bd -q5m
golan    28670  0.0  0.0  2684  804 pts/23   S+   23:11   0:00 grep --color  D

Here we can see two processes in that state (denoted by D in the ps output). Normally we don't get to see many of these at the same time, and if we issue the same command again we probably won't see them again, unless there is a problem, which is why I'm writing this in the first place :-).
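Note how the grep itself sneaks into the output because its own command line matches " D". Matching the state column with awk is a bit more robust, and wrapping it in a loop helps catch processes that flicker in and out of the state (a little sketch):

% while true; do ps axu | awk '$8 ~ /^D/'; sleep 1; done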

Examining the suspects

Well, we now need to examine the suspects and filter them, because there might be perfectly valid processes in uninterruptible sleep that are not responsible for the high load, so we need to find out which is which. One thing we can do is attach strace to a specific process and see what it is doing. This is easily achieved this way:

golan@kore:~$ strace -p 12766
Process 12766 attached - interrupt to quit
write(1, "y\n", 2)                      = 2

Here we see the output of a process executing yes. So, what does this output tell us? It shows us all the system calls the process is making, so we can effectively see whether it is reading or writing.
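If the full stream of system calls is too noisy, strace can also be told to show only the ones we care about:

% strace -p 12766 -e trace=read,write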

But all this can be very time consuming if we have quite a few processes to examine. What we can do instead is strace all of them at once, save their output to different files, and check the files later.

If what we are examining is a process called command, we could do it this way:

# mkdir /tmp/strace
# cd /tmp/strace
# for i in `ps axuwf | grep "[c]ommand" | awk '{ print $2 }'`; do (strace -p $i > command-$i.strace 2>&1) & done

What this does is create a series of files called command-PID.strace, one for each process matching the grep pattern (the brackets in [c]ommand are a little trick to keep grep from matching its own process in the ps output). If we leave this running for a while, we can then examine the contents of all the files. Even better, if we list the files ordered by size, we have a pointer to the processes making the most system calls; all we need to do then is verify that those system calls are actually reads and writes. And don't forget to kill all the strace processes we sent to the background by issuing a killall strace :-)
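For instance, once the traces have accumulated for a while (a sketch; 27351 is just an example PID):

# ls -lS *.strace | head
# grep -cE '^(read|write)\(' command-27351.strace

The first command lists the biggest traces first; the second counts the read and write calls in a given trace.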

So now we have a list of processes causing lots of reads and writes on the hard disk. What to do next depends on the situation and on what you want to achieve. You might want to kill the processes, or find out who (the person) started them, or which network connection, IP address, etc. they are serving. There are a bunch of utilities you can use, including strace, netstat, lsof, etc. It's up to you what to do next.
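For example, taking the rdiff-backup PID from the ps output earlier (a sketch):

% lsof -p 27351              # which files does it have open?
% netstat -tnp | grep 27351  # which TCP connections belong to it?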

And…

Well, this is me learning from my colleagues and from problems that arise when you least expect them. My understanding of the Linux kernel is not that good, but now many of the things I studied in my Operating Systems class are starting to make a little more sense. So please, if you have experience with this or know of other ways to get this kind of information, please share it with me (as a comment or otherwise). I'm still learning :)

December 4, 2006

Speeding up trac’s response time

Filed under: Internet, Programming — jesus @ 18:30

I've been trying to speed up an installation of trac over the last few days. The web interface took ages to display each directory or file within the subversion repository, even though the repository wasn't too big. The only recent change to it was that we started using a vendor branch imported into our main repository using svm.

So, after a few hours trying different solutions and reading trac's source code, I think I found where the bottleneck was. Well, it was sqlite (http://www.sqlite.org/download.html) that was causing it. Trac uses a CachedRepository object to access the repositories. Whenever we want to get the changesets, a function to synchronize the repository is called:

class CachedRepository(Repository):
    def get_changeset(self, rev):
        if not self.synced:
            self.sync()
            self.synced = 1
        return CachedChangeset(self.repos.normalize_rev(rev), self.db, self.authz)

and that method, sync(), makes a call to:

youngest_stored = self.repos.get_youngest_rev_in_cache(self.db)

which boils down to this:

def get_youngest_rev_in_cache(self, db):
    """Get the latest stored revision by sorting the revision strings
    numerically
    """
    cursor = db.cursor()
    cursor.execute("SELECT rev FROM revision ORDER BY -LENGTH(rev), rev DESC LIMIT 1")
    row = cursor.fetchone()
    return row and row[0] or None

And that SQL query was taking around 1-2 seconds each time it was executed. It turned out we were running old versions of sqlite and pysqlite, so a ./configure && make && make install following the recommended installation instructions saved my day :-)
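By the way, if you want to check whether this same query is the bottleneck on your own installation before upgrading, you can time it directly against trac's database (a sketch; the path is a placeholder for your environment's db/trac.db):

% time sqlite3 /path/to/trac-env/db/trac.db \
    "SELECT rev FROM revision ORDER BY -LENGTH(rev), rev DESC LIMIT 1"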

Hope it's useful to somebody if it gets indexed by Google.

November 30, 2006

Setting up a subversion mirror repository using svnsync

Filed under: Internet, Programming — jesus @ 01:48

Subversion 1.4 ships with a new tool called svnsync with which you can maintain mirror repositories quite easily. I've been working on one at work and would like to share my findings with you in case they interest anyone :-)

Understanding the problem

In order to have a mirror repository, it is important that commits only happen on the master and are then synchronized to the mirror using the svnsync program. The mirror repository can then be used for anything but committing: backups, high-speed checkouts, web front-ends, etc.

So, svnsync must be the only one able to commit to the mirror repository. If we use the Apache integration, there are various ways to enforce this. Let's say we are using svn+ssh for authentication, in which case it is more complicated, as ssh access usually grants write access to the file system; creating a dedicated user is going to be handy.

Creating and populating the repository

So, let’s say that we created a user called svnsync on the target machine and that we are going to create a new subversion repository in its home directory:

svnadmin create /home/svnsync/svn

Now we need to set up a hook so that only the svnsync user may change revision properties. For this, we create /home/svnsync/svn/hooks/pre-revprop-change with:

#!/bin/sh
USER="$3"

if [ "$USER" = "svnsync" ]; then exit 0; fi

echo "Only the svnsync user can change revprops" >&2
exit 1
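The hook also needs to be executable, or subversion will refuse the revision property changes:

$ chmod +x /home/svnsync/svn/hooks/pre-revprop-change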

We grant access to the user running svnsync on the main machine by copying its ssh key to .ssh/authorized_keys. Now we only need to initialize the remote repository. Note that we can run svnsync from wherever we want, but for the sake of simplicity we will run it on the main machine, where the original repository resides.

$ svnsync init --username svnsync \
      svn+ssh://svnsync@remote/home/svnsync/svn \
      file:///source-svn

Note:

  • The syntax is
    svnsync init DEST SOURCE

    That is, the destination repository goes before the source repository.

  • There is no “:” between the host name and the path to the remote repository.

With this command we have initialized the destination repository, and now we are ready to populate it:

svnsync synchronize --username svnsync \
       svn+ssh://svnsync@remote/home/svnsync/svn

As we already initialized the repository, there is no need to specify the source this time. The synchronization will take more or less time depending on how big your repository is and how fast your network connection is. Hopefully it will have finished by the time you get back from your coffee :-)

Creating another user to let users access the repository

So, we will now create a user called svn which will be used to access the repository with the subversion client. As we are using svn+ssh, all we need to do is grant access through this user to everyone who has access to the main repository. If we are using ssh keys, it's as easy as copying all the allowed keys to the /home/svn/.ssh/authorized_keys file.
Also, we change the permissions on the repository at /home/svnsync/svn (and its parent) to be something like

drwxr-xr-x  7 svnsync users 4096 2006-11-28 17:30 svn/

we let svnsync (the svnsync user) be the owner with write permissions to the repository, and let svn (and all the users ssh'ing in as it) have read-only access, provided both belong to the users group.
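Something like this would set that up (a sketch, assuming both users belong to the users group):

# chown -R svnsync:users /home/svnsync/svn
# chmod -R u+rwX,go+rX,go-w /home/svnsync/svn

With that in place, a checkout through the svn user works, but a commit is refused: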

$ svn co svn+ssh://svn@remote/home/svnsync/svn/test
A    test/trunk
[...]
A    test/trunk/readme
Checked out revision 2.
$ echo "test" >> test/trunk/readme
$ cd test/
$ svn -m "test commit" ci
Sending        trunk/readme
Transmitting file data .
svn: Commit failed (details follow):
svn: Can't create directory '/home/svnsync/svn/db/transactions/2-1.txn':
Permission denied

And that's all: the checkout succeeds, but the commit is refused with a permission error, which is exactly what we want on a read-only mirror.

Committing to the master repository

In case you want to commit back to the master repository, you need to do an svn switch --relocate on your working copy to point it at the master, but for that to work, both repositories need to have the same UUID.

  1. To get the UUID on the main machine:
    svnadmin dump -r0 /source-svn | head -n 3 > saved-uuid
    
  2. Copy the file saved-uuid to the remote machine and do a
    svnadmin load --force-uuid /home/svnsync/svn < saved-uuid
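With the UUIDs matching, the relocation on an existing working copy would look something like this (the second URL is a placeholder for your master repository):

$ svn switch --relocate \
      svn+ssh://svn@remote/home/svnsync/svn \
      svn+ssh://user@master/source-svn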
    

So, things to take into account:

  1. When populating the repository, we use the svnsync user who has write permissions to the repository (svn+ssh://svnsync@…)
  2. When checking out, we use the svn user (svn+ssh://svn@…)
  3. Automation

    If we want to keep both repositories synchronized, we need to set up a couple of hooks on the source repository.

    post-commit

    Add this to that hook (it needs to be executable and must not have the .tmpl extension):

    # Propagate the data to the remote repository
    /usr/local/bin/svnsync synchronize --username svnsync \
            svn+ssh://svnsync@remote/home/svnsync/svn &
    

    post-revprop-change

    We also want property changes to be propagated to the remote repository:

    # Propagate property changes to the remote repository.
    REV="$2"   # the revision whose properties were changed
    /usr/local/bin/svnsync copy-revprops --username svnsync \
           svn+ssh://svnsync@remote/home/svnsync/svn "$REV" &
    

    Note that we send both to the background to let the user get on with what they were doing.

Final notes

This is a quick guide on how to set things up to have a remote mirror repository. There is much more to it than this, and I recommend you read the documentation and, obviously, make a backup first. Doing an svnadmin dump only takes a while and it's really worth it.

In any case, just let me know if you find any errors or typos.
