Not Really a Blog

November 25, 2009

Tired of remembering complicated passwords?

Filed under: Computers — Tags: , , , , — jesus @ 16:41

My colleague Mat showed me a nice little trick today. Say you are bored of remembering a different password for every website you use, or maybe you just don't want to reuse the same password on all of them. It's simple and easy if you use a Linux computer: all you have to use, and more importantly remember, is a simple word or combination of words, plus a little shell command that turns it into a password that is really difficult to guess.

Say you want to access your Google account and you want to derive the password from the word google. You can use the SHA-1 hash algorithm to generate an interesting password like:


% echo "google" | sha1sum
d662cc72cfed7244a88a7360add85d5627b9cd6c  -

Or just use your hash algorithm of choice. And if the result is too long for the website in question, take the first 10 characters or so:


% echo "google" | sha1sum | cut -c1-10
d662cc72cf

And that's it: copy, paste and you are done. Yes, it's a bit of a faff, but, well, you want something secure, don't you?
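If typing the pipeline every time feels like too much of a faff, you can wrap it in a little shell function. A rough sketch (the name genpw is made up here), which also keeps the easy word out of your shell history:

# prompt for the easy word without echoing it, hash it,
# and keep the first 10 characters
genpw() {
    read -rs -p "word: " word; echo >&2
    printf '%s\n' "$word" | sha1sum | cut -c1-10
}

Typing google at the prompt gives the same d662cc72cf as above, since printf adds the same trailing newline that echo did.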

Simply brilliant. Thanks Mat.

February 10, 2008

Tricks to diagnose processes blocked on heavy I/O in Linux

Filed under: Linux, System Administration — Tags: , , — jesus @ 23:59

There's one aspect in which the Linux kernel, the GNU operating system and the related tools might be lagging behind, especially with the 2.4 kernel series. I'm talking about I/O accounting, or how to know what's going on with the hard disk and the other devices used to read and write data.

The thing is that Linux provides you with a few tools with which you can tell what's going on with the box and its set of disks: vmstat gives you a lot of information, and various other files are scattered around the /proc filesystem. But that information only describes the system globally, so it's good for diagnosing whether a high load on a box is due to some process chewing CPU cycles away or to the hard disk being hammered and becoming painfully slow. But what if you want to know exactly what is going on, and which process or processes are responsible for the situation? How do you find out? The answer is that Linux doesn't provide you with tools for that, as far as I know (if you know of any, please leave a comment). There's no such thing as a top utility for per-process I/O accounting. The situation is better on Linux 2.6, provided you activate the taskstats accounting support, with which you can query that information per process. The user-space utilities are somewhat scarce, but at least there's something to start playing with.

However, there are some tricks you can use to try to find out which process is the culprit when things go wrong. As usual, many of these tricks come from work, where I keep learning from my colleagues (who, by the way, are much more intelligent than I am ;-)) whenever some problem arises that needs immediate action.

So, let's define the typical scenario in which we could apply these tricks. You've got a Linux box with a high load average, say 15, 20, etc. As you may know, the load average measures the number of processes waiting in the run queue. That doesn't necessarily mean the CPU is loaded: when processes are blocked on I/O, say a read from a slow disk, the CPU may just sit there mostly idle. The number only makes sense when you know how many CPUs the box has; a load average of 2 on a two-CPU box is, ideally, just fine.

The number one tool for identifying what's going on is vmstat, which tells you a lot about the box, especially when you execute it periodically, as in vmstat 1. If you read the man page (and I do recommend reading it), you can get an idea of all the information that will be scrolling across the screen :-). Almost all of its output is useful for diagnosis, except the last column on a Linux 2.4 box (there, that value is added to the idle column).

With this tool we can find out whether the system is busy doing I/O and how, for example by looking at the bo and bi columns. Swapping, when it's happening, also implies that the hard disk is being hammered, but it additionally means there isn't enough memory in the system for all the processes running at that moment. In short, almost all of its output can be useful for identifying what's going on.
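As a quick reference, this is roughly what I keep an eye on (the column descriptions are paraphrased from the vmstat man page):

# sample once per second, five times, and watch these columns:
#   b     - processes in uninterruptible sleep (usually blocked on I/O)
#   si/so - memory swapped in from / out to disk
#   bi    - blocks received from a block device (reads)
#   bo    - blocks sent to a block device (writes)
vmstat 1 5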

Ok, so back to our problem: how do we start? Well, the first thing to do is to try to find out what is running that could be causing this: who are the usual suspects? By looking at ps output we can get an idea of which processes and/or applications could be causing the disk I/O. The problem is that sometimes an application runs tens or hundreds of processes, each serving a remote client (say Apache prefork), and maybe only some of them are causing the havoc, so killing every possible process is not an option in a production environment (killing just the processes causing the problem may be feasible, because they might be wedged or something).

Finding the suspects

One way to find the processes causing the reads and writes is to look at the processes in uninterruptible sleep. As the box has a high load average because of I/O, there must surely be processes in that state, waiting for the disk to return data so their system calls can complete, and these processes are likely to be involved in the high load of the system. If you think that processes in uninterruptible sleep cannot be killed, you are right, but we are assuming they enter this state briefly, again and again, because they are reading from and writing to the disk non-stop. If you have read the vmstat man page, you will have noticed that the b column tells us the number of processes in this state.

golan@kore:~$ ps aux | grep " D"
root     27351  2.9  0.2 11992 9160 ?        DN   23:06   0:08 /usr/bin/python /usr/bin/rdiff-backup -v 5 --restrict-read-only /disk --server
mail     28652  0.5  0.0  4948 1840 ?        D    23:11   0:00 exim -bd -q5m
golan    28670  0.0  0.0  2684  804 pts/23   S+   23:11   0:00 grep --color  D

Here we can see two processes in that state (noted by D in the ps output). Normally we don't get to see many of these at the same time, and if we issue the same command again we are probably not going to see them again, unless there is a problem, which is why I'm writing this in the first place :-).
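If they keep slipping away, one crude approach (just a rough sketch) is to sample the process table in a loop and log every process in uninterruptible sleep, together with a timestamp:

while true; do
    ps -eo state,pid,user,args | awk -v t="$(date +%T)" '$1 ~ /^D/ { print t, $0 }'
    sleep 1
done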

Examining the suspects

Well, we now need to examine the suspects and filter them, because there might be perfectly valid processes in uninterruptible sleep that are not responsible for the high load, so we need to find out which is which. One thing we can do is attach strace to a specific process and see how it's doing. This is easily achieved like this:

golan@kore:~$ strace -p 12766
Process 12766 attached - interrupt to quit
write(1, "y\n", 2)                      = 2

Here we see the output of a process executing yes. So, what does this output tell us? It shows all the system calls the process is making, so we can effectively see whether it is reading or writing.

But all this can be very time consuming if we have quite a few processes to examine. What we can do instead is strace all of them at once, save their output to different files, and check them later.

If what we are examining is a process called command, we could do it this way:

# mkdir /tmp/strace
# cd /tmp/strace
# for i in `ps axuwf | grep command | grep -v grep | awk '{ print $2 }'`; do (strace -p $i > command-$i.strace 2>&1) & done

What this does is create a series of files called command-PID.strace, one for each process that matches the regular expression in the grep command. If we leave this running for a while, we can then examine the contents of all the files. Even better, if we list the files ordered by size we get a pointer to the processes making the most system calls; all we then need to do is verify that those system calls are actually reads and writes. And don't forget to kill all the strace processes we sent to the background by issuing a killall strace :-)
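To rank them, something along these lines works (a rough sketch): list the trace files from largest to smallest, then count how many lines in each one are read() or write() calls:

ls -lS /tmp/strace/command-*.strace
for f in /tmp/strace/command-*.strace; do
    printf '%6d %s\n' "$(grep -cE '^(read|write)\(' "$f")" "$f"
done | sort -rn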

So now we have a list of processes causing lots of reads and writes on the hard disk. What to do next depends on the situation and on what you want to achieve. You might want to kill the processes, or find the person who started them, or the network connection, IP address, etc. behind them. There are a bunch of utilities you can use, including strace, netstat, lsof, etc. It's up to you what to do next.

And…

Well, this is me learning from my colleagues and from problems that arise when you least expect them. My understanding of the Linux kernel is not that good, but now many of the things I studied in my Operating Systems class start to make a little more sense. So please, if you have experience with this or know other ways to get this kind of information, share it with me (as a comment or otherwise). I'm still learning :)

January 21, 2008

Installing From Source, The Easy Way

Filed under: Linux, System Administration — Tags: — jesus @ 00:40

Installing software on any Unix-like operating system has become very easy these days. Package managers such as dpkg, the one used by Debian or Ubuntu, take away most of the hassle by dealing with all the dependencies and intricacies that modern software has. It's just a matter of getting the package that some hard-working and/or generous developer has built and installing it on our system. It's straightforward compared with how things were a mere few years ago.

We've always had the option of installing from source, provided we have resolved all the needed dependencies. Installing from source can be handy and useful at times: we might want to change some options at compile time, or we might want to have two versions of the same package installed in different locations, for example.

The problem comes when we want to upgrade the software and we have different versions of it installed: we can end up in a very cluttered scenario, with files from different versions spread across the file system. Worse, we might not have an easy way to track down which files belong to which version, let alone uninstall the software.

The Easy Way ® ;-)

As always there are simple solutions for complex problems. There is a nice piece of software which helps us to keep track of software packages installed from source in a clean way. It doesn’t work for all cases, but it does a pretty good job for most of them. I’m talking about epkg, The Encap Package Manager.

I'll describe how it works in a not very technical or detailed way, just to get you going with it, and then I'll install it on my system so you can see how handy it is.

Basically, all you have to do is install every software package under /usr/local/encap, creating a directory for each of them in a package-version.revision fashion. Then epkg creates symlinks to the appropriate places, usually under /usr/local.

So, say we’ve got:

/usr/local/encap/mysoft-1.1/bin/mysoft
/usr/local/encap/mysoft-1.1/lib/mysoft.so

epkg would create symlinks such as:

/usr/local/bin/mysoft  ->  /usr/local/encap/mysoft-1.1/bin/mysoft
/usr/local/lib/mysoft.so  -> /usr/local/encap/mysoft-1.1/lib/mysoft.so

and that’s it, pretty much. With more complex packages it can get more difficult, but you get the idea.

Let’s just see an example

First of all, we need to install epkg on our system. I will be using an Ubuntu 7.10 system which, to date, doesn't ship epkg, so I will install it from source in the usual way, to /usr/local.

root@kore:/usr/local/src# wget ftp://ftp.encap.org/pub/encap/epkg/epkg-2.3.9.tar.gz
--23:47:17--  ftp://ftp.encap.org/pub/encap/epkg/epkg-2.3.9.tar.gz
           => `epkg-2.3.9.tar.gz'
...
23:47:32 (85.66 KB/s) - `epkg-2.3.9.tar.gz' saved [237232]

root@kore:/usr/local/src# tar xfz epkg-2.3.9.tar.gz
root@kore:/usr/local/src# cd epkg-2.3.9/
root@kore:/usr/local/src/epkg-2.3.9#
root@kore:/usr/local/src/epkg-2.3.9# ./configure --prefix=/usr/local
checking for epkg... no
checking for mkencap... no
checking for Encap source directory... /usr/local/encap
checking for Encap target directory... /usr/local
checking for Encap package directory... /usr/local/encap/epkg-2.3.9
checking for gcc... gcc
...
config.status: creating epkg/Makefile
config.status: creating mkencap/Makefile
config.status: creating mkencap/mkencap_environment
config.status: creating doc/Makefile
config.status: creating config.h

As we can see, we'll install the package with its default options, pointing to /usr/local/encap as the encap directory. Please see the help for more options.

We install it:

root@kore:/usr/local/src/epkg-2.3.9# make && make install
...
epkg: installing package epkg-2.3.9...
  > reading Encap source directory...
  > installing package epkg-2.3.9
    !  man: not an Encap link
    > executing postinstall script
installing: /usr/local/etc/mkencap_environment
    > installation partially successful
root@kore:/usr/local/src/epkg-2.3.9#

If we have a look at /usr/local/bin and /usr/local/encap, we see it has installed itself as an encapped package :), and now we are ready to use it with a real example.

Installing GLE

Say we want to install the Graphics Layout Engine, or GLE, on our computer and we don't have a binary package at hand, or we want more control over it, or whatever :), let's just do it with epkg.

  1. Get the source
    root@kore:/usr/local/src# wget http://surfnet.dl.sourceforge.net/sourceforge/glx/GLE-4.1.1-src.zip
    root@kore:/usr/local/src# unzip GLE-4.1.1-src.zip
    root@kore:/usr/local/src# cd gle4/
  2. Configure: We will configure the software to make it believe it is going to be installed in /usr/local, but we will actually install it under /usr/local/encap/ instead, so epkg can deal with it. This is an important step, so let's configure it with those options and with any others we might want to use:
    root@kore:/usr/local/src/gle4# aptitude install libpng12-dev libpng12-0 libtiff4-dev libtiff4 libjpeg62-dev libjpeg62
    root@kore:/usr/local/src/gle4# ./configure --with-qt=no --prefix=/usr/local
    ...
    root@kore:/usr/local/src/gle4# make
    ...

    As you can see, I installed some dev packages (using Debian's aptitude) because they are dependencies for GLE. After that, I configured the package without any graphical environment (based on Qt) and pointing to /usr/local. Then we compile it.

  3. Installing. Now we will install it under /usr/local/encap. Bear with me and I'll explain what I did after doing it :)
    root@kore:/usr/local/src/gle4# make DESTDIR=/usr/local/encap/GLE-4.1.1 install
    root@kore:/usr/local/src/gle4# cd /usr/local/encap/GLE-4.1.1
    root@kore:/usr/local/encap/GLE-4.1.1# mv usr/local/* .
    root@kore:/usr/local/encap/GLE-4.1.1# rm -rf usr
    root@kore:/usr/local/encap/GLE-4.1.1# ls
    bin  lib  share

    Ok, what I've done is execute make install setting the DESTDIR variable (which is supported by GLE's Makefile) so that it installs under /usr/local/encap/GLE-4.1.1. Installed that way, everything ends up inside its own “usr/local” subdirectory, so to make it look as if it had been installed in the /usr/local hierarchy, we move things to the right place. Now we have:

    /usr/local/encap/GLE-4.1.1/bin
    /usr/local/encap/GLE-4.1.1/lib
    /usr/local/encap/GLE-4.1.1/share

    and so on.

  4. Install it with epkg. Now the final step is to call epkg to actually create the proper symlinks, and that's it:
    root@kore:/usr/local/encap/GLE-4.1.1# epkg GLE
    epkg: installing package GLE...
      > reading Encap source directory...
      > installing package GLE-4.1.1
        > installation successful
    root@kore:/usr/local/encap/GLE-4.1.1#
    root@kore:/usr/local/encap/GLE-4.1.1# gle
    GLE version 4.1.1
    Usage: gle [options] filename.gle
    More information: gle -help
    root@kore:/usr/local/encap/GLE-4.1.1#

And that’s it, really. Now, two things,

  • If we install a newer version, say 4.2.0 whenever that's ready, we just install it under /usr/local/encap/GLE-4.2.0 as we've seen before, and simply calling again
    # epkg GLE

    would create the right symlinks (that is, "deinstall" the previous version and install the new one).

  • If we want to uninstall it (that is, remove the symlinks), we simply issue this command:
    # epkg -r GLE

    and that’s all.

Summary

  1. epkg lets us install software from source while keeping control over it, i.e. installing it in a clean way and being able to uninstall or upgrade it without cluttering the file system.
  2. All you have to do is install the software under /usr/local/encap/package-version as if it were /usr/local. The DESTDIR variable in Makefiles helps us do this easily; if the software's build system is too simple to support it, you'll have to do it manually (see the sketch after this list).
  3. Remember that /usr/local/encap/package-version/usr/local/bin must end up as /usr/local/encap/package-version/bin.
  4. Execute epkg package to install it and epkg -r package to uninstall it.
  5. If you have problems with the libraries, try executing ldconfig.
  6. Be careful: I usually make mistakes, overwrite things and delete files, so take care with what you do and do it under your own responsibility ;-).
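To wrap up, here is the whole procedure condensed into one sketch, for a hypothetical autotools-style package called foo-1.0 (adjust paths and names to taste):

cd /usr/local/src/foo-1.0
./configure --prefix=/usr/local
make
make DESTDIR=/usr/local/encap/foo-1.0 install
cd /usr/local/encap/foo-1.0
mv usr/local/* . && rm -rf usr   # flatten usr/local into the encap dir
epkg foo                         # create the symlinks under /usr/local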

December 19, 2007

Modifying a live linux kernel

Filed under: Linux, Programming, System Administration — Tags: , , — jesus @ 17:22

Before reading this, I just need to say something:

I’ve no idea of linux, I’ve no idea of programming, I’ve no idea of computers… Everything you read here might have been invented, so, please, do not reproduce what I write here. If you do, bear in mind that you do it under your own responsibility. In fact, what is a computer anyway?

The other day we were having issues with a box that was used as an NFS server, among other things. The issues appeared after upgrading the box from kernel 2.4 to kernel 2.6.22.1 and were related to locking on the NFS server, because the behaviour of the flock system call changed with the linux-2.6.11-18 flock patch. From the NFS FAQ:

The NFS client in 2.6.12 provides support for flock()/BSD locks on NFS files by emulating the BSD-style locks in terms of POSIX byte range locks. Other NFS clients that use the same emulation mechanism, or that use fcntl()/POSIX locks, will then see the same locks that the Linux NFS client sees.

The problems we had were related to exporting the same file system over both NFS and Samba, with locking not working properly.

SMB supports two types of locks – file-wide locks and byte-range locks.

  • File-wide locks
    • Called ‘share modes’ in SMB parlance
    • Also known as ‘BSD-style locks’
    • provided by flock() in Linux
    • provided by a ‘share mode’ flag when opening a file under Win32
    • Supported by Samba itself by storing them in a TDB; they get listed under ‘Locked Files’ at the bottom of smbstatus
    • May also be enforced in the kernel using flock() if HAVE_KERNEL_SHARE_MODES is 1.
  • Byte-range locks
    • Called ‘POSIX-style’ locks.
    • provided by fcntl(fd, F_GETLK) in POSIX.
    • provided by _locking() in Win32
    • lockf() is a Linux wrapper around fcntl() for locking the whole file.
    • Supported by samba by a ‘Windows to POSIX byte-range overlap conversion layer’ and then fcntl().

Windows applications appear to use both share modes and byte-range locks.

In Linux, flock() and fcntl() locks are oblivious to each other, as per http://lxr.linux.no/source/Documentation/locks.txt.

NFSv3 (as a protocol) only supports byte-range locks. However, nfsd does honour flock() locks taken out by other processes on files on the server, although NFS clients cannot set such locks themselves. See http://nfs.sourceforge.net/#faq_d10

Unfortunately, Linux 2.6.12 adds flock() emulation to the Linux NFS client by translating it into a file-wide fcntl(). This means that flock()s and fcntl()s *do collide* on remote NFS shares, which introduces all the potential application race conditions that Linux avoided locally by keeping them oblivious to each other. The practical upshot is that if you re-share an NFS mount via Samba, and a Windows client (e.g. Outlook opening a PST file) opens a file with a share mode, then byte-range locking operations will fail because the lock has already been acquired. (The fact that NFS doesn't realise the same PID holds both locks, and so doesn't allow them both, is probably an even bigger problem.) The solution is to revert the relevant bits of the patch responsible: http://www.linux-nfs.org/Linux-2.6.x/2.6.11/linux-2.6.11-18-flock.dif. Disabling share modes in Samba is not an option, as it also disables the application-layer TDB support for them, and disabling HAVE_KERNEL_SHARE_MODES would stop other programs (e.g. nfsd) from being aware of what's been flock()ed.

So our solution for our server was to revert that change from 2.6.11-18 by applying this patch:

--- fs/nfs/file.c       2007-07-10 19:56:30.000000000 +0100
+++ fs/nfs/file.c.nfs_flock_fix 2007-11-13 13:40:06.000000000 +0000
@@ -543,10 +543,24 @@
         * Not sure whether that would be unique, though, or whether
         * that would break in other places.
         */
-       if (!(fl->fl_flags & FL_FLOCK))
+
+       /**
+        * Don't simulate flock() using posix locks, as they appear to collide with
+        * legitimate posix locks from the same process.
+        */
+       if (fl->fl_flags & FL_FLOCK)
                return -ENOLCK;

        /* We're simulating flock() locks using posix locks on the server */
+       /* ...except we shouldn't get here, due to the above patch. */
        fl->fl_owner = (fl_owner_t)filp;
        fl->fl_start = 0;
        fl->fl_end = OFFSET_MAX;

So, for us, recompiling the kernel with this patch on the production server fixed all our problems. But what if we wanted to do this live: could such a subtle change be made without rebooting? You might be thinking right now about the various knobs under /proc that change the behaviour of the kernel on the fly, but what if there is no such knob? What if we wanted to change something and there was no way to do it because it is not implemented or not possible?

Let’s see.

So, from an academic point of view we wanted to see if this could really be done, whether the Linux kernel would let us do it, whether it was feasible. So we set up a testing box on which we would try to modify the running kernel. How hard would it be?

It turns out there's a way to do it, which my colleague Matthew came up with (all credit to him; I'm just telling the story). Let's examine the piece of code we want to change. The offending code lives in fs/nfs/file.c of the Linux kernel:

/*
 * Lock a (portion of) a file
 */
static int nfs_flock(struct file *filp, int cmd, struct file_lock *fl)
{
        dprintk("NFS: nfs_flock(f=%s/%ld, t=%x, fl=%x)n",
                        filp->f_path.dentry->d_inode->i_sb->s_id,
                        filp->f_path.dentry->d_inode->i_ino,
                        fl->fl_type, fl->fl_flags);
         /*
         * No BSD flocks over NFS allowed.
         * Note: we could try to fake a POSIX lock request here by
         * using ((u32) filp | 0x80000000) or some such as the pid.
         * Not sure whether that would be unique, though, or whether
         * that would break in other places.
         */

        if (!(fl->fl_flags & FL_FLOCK))
                return -ENOLCK;

        /* We're simulating flock() locks using posix locks on the server */
        fl->fl_owner = (fl_owner_t)filp;
        fl->fl_start = 0;
        fl->fl_end = OFFSET_MAX;

        if (fl->fl_type == F_UNLCK)
                return do_unlk(filp, cmd, fl);

        return do_setlk(filp, cmd, fl);

}

What we want to do is change the behaviour of the previous function so that the if condition becomes:

        if (fl->fl_flags & FL_FLOCK)
                return -ENOLCK;

That means the change is fairly trivial: it boils down to inverting a test, basically changing the way the branching (in the machine code) is done for that if statement. If we disassemble the object file, file.o, we get something like:

# objdump -d fs/nfs/file.o
...
00000885 <nfs_flock>:
 885:   57                      push   %edi
 886:   89 d7                   mov    %edx,%edi
 888:   56                      push   %esi
 889:   89 c6                   mov    %eax,%esi
 88b:   53                      push   %ebx
 88c:   89 cb                   mov    %ecx,%ebx
 88e:   83 ec 14                sub    $0x14,%esp
 891:   f6 05 00 00 00 00 40    testb  $0x40,0x0
 898:   74 3b                   je     8d5
 89a:   0f b6 41 2c             movzbl 0x2c(%ecx),%eax
 89e:   89 44 24 10             mov    %eax,0x10(%esp)
 8a2:   0f b6 41 2d             movzbl 0x2d(%ecx),%eax
 8a6:   89 44 24 0c             mov    %eax,0xc(%esp)
 8aa:   8b 56 0c                mov    0xc(%esi),%edx
 8ad:   8b 42 0c                mov    0xc(%edx),%eax
 8b0:   8b 40 20                mov    0x20(%eax),%eax
 8b3:   89 44 24 08             mov    %eax,0x8(%esp)
 8b7:   8b 42 0c                mov    0xc(%edx),%eax
 8ba:   8b 80 9c 00 00 00       mov    0x9c(%eax),%eax
 8c0:   c7 04 24 20 01 00 00    movl   $0x120,(%esp)
 8c7:   05 40 01 00 00          add    $0x140,%eax
 8cc:   89 44 24 04             mov    %eax,0x4(%esp)
 8d0:   e8 fc ff ff ff          call   8d1
 8d5:   f6 43 2c 02             testb  $0x2,0x2c(%ebx)
 8d9:   74 47                   je     922
 8db:   80 7b 2d 02             cmpb   $0x2,0x2d(%ebx)
 8df:   89 73 14                mov    %esi,0x14(%ebx)
 8e2:   c7 43 30 00 00 00 00    movl   $0x0,0x30(%ebx)
 8e9:   c7 43 34 00 00 00 00    movl   $0x0,0x34(%ebx)
 8f0:   c7 43 38 ff ff ff ff    movl   $0xffffffff,0x38(%ebx)
 8f7:   c7 43 3c ff ff ff 7f    movl   $0x7fffffff,0x3c(%ebx)
 8fe:   75 11                   jne    911
 900:   83 c4 14                add    $0x14,%esp
 903:   89 d9                   mov    %ebx,%ecx
 905:   89 f0                   mov    %esi,%eax
...

We can now look at the disassembled code of the nfs_flock() function. If you look at address 0x8d9, there's an instruction that looks suspiciously similar to the test carried out by the if statement. If you know assembly and know how a compiler works, you can work out that this jump instruction (JE) is exactly the one we want to change into a JNE instruction. (I am not going to expand here on assembly, compilers and the like; I guess that if you wanted to do this, you would already know this. And, by the way, I've no idea about those concepts either, seriously.)

If you are not sure that it's the right instruction, you can apply the patch, recompile the kernel, disassemble the same file.o object again and compare the two to see what changed.

Also, if you look at address 0x898, there's another JE instruction which may look like the one we are after, but it belongs to the dprintk call, as we have debugging enabled in that kernel.

If we look up the instructions in the IA32 manual, we see that the opcodes for the interesting instructions are:

  • JE: 74
  • JNE: 75

Ok, so now we know that we want to change a JE instruction (opcode 74) into a JNE instruction (opcode 75) at address 0x8d9 of the object file file.o.

The problem now is finding out where in kernel memory this piece of code lives. One approach you might think of is grepping the whole of memory for a particular sequence of instructions. This is not recommended, and I will explain why later. First, let's see how we can get access to the kernel memory, where we can then modify the data…

If you have a look at your unix system, you’ll see that you have a /dev/kmem special file, with which you can access the memory from the kernel’s point of view. This is quite useful as you can access it in read mode and, more interestingly, write mode. However, doing stuff with it might be a bit dangerous, as you might have guessed. It even seems that some vendors will disable this special file.

Anyway, as I said before, you cannot, and don't want to, read or grep the whole of /dev/kmem. The reason seems to be write-only hardware registers mapped into memory, so a read can crash the system. (You might think that advice is very old, from a distant time, but believe me, it still crashes Linux if you do it. We tried reading the whole of /dev/mem and the network driver crashed, among other little pieces, so don't bother.)

So basically the thing boils down to:

  1. Finding out exactly where we have to change the kernel (ie, at which memory address)
  2. Open /dev/kmem and changing it with the right tool
  3. Hope everything went fine :-)

Finding out where

We need to find out where in the kernel we have to make the change. This can easily be done using tools and information the kernel itself provides.

First, we need to find out where the kernel function nfs_flock starts, and that can be done by having a look at the System.map file that is generated every time we compile a kernel. System.map helps kernel developers debug their code by mapping kernel symbols to memory addresses, so it is much easier to find things: it contains the kernel symbol table with all the symbols.

Ours looks like:

...
c01a9b3f t do_unlk
c01a9b97 t do_setlk
c01a9c2b t nfs_lock
c01a9d21 t nfs_flock
c01a9dcc T nfs_get_root
c01a9f3c T nfs_write_inode
...

So now, we know that nfs_flock is located at 0xc01a9d21. We’ll use this in a minute.

We saw earlier that we were looking at the instruction located at 0x8d9 in the object file file.o (found with objdump). We also know that, in that object file, nfs_flock starts at 0x885. That means the byte we want to change is located exactly

0x8d9 - 0x885 = 0x54

bytes in, that is, at nfs_flock + 0x54.

As you might know, those addresses in the object file are relative to that file; when it is linked into the actual kernel, the addresses are all relocated and recalculated. So, basically, the right spot is at

0xc01a9d21 + 0x54
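If you want to let the shell do the arithmetic, something like this works (a small sketch, assuming the System.map that matches the running kernel is at hand):

grep ' nfs_flock$' System.map                      # -> c01a9d21 t nfs_flock
printf '0x%x\n' $(( 0xc01a9d21 + 0x8d9 - 0x885 ))  # -> 0xc01a9d75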

Opening /dev/kmem and modifying it

Opening /dev/kmem needs to be done with the right tools, basically because we need LARGE_FILE support in whatever tool we use to modify it: the /dev/kmem special file is a representation of the kernel's memory, and we need to access it with a tool that supports large files.

In our system, the easiest way to do it is with perl. First, we double check that perl was compiled with LARGE_FILE support:

root@devbox:~# perl -V  | grep LARGE
    cc='cc', ccflags ='-fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64',
  Compile-time options: USE_LARGE_FILES
root@devbox:~#

and now we can modify the kernel with a one-liner such as:

root@devbox:~# perl -e 'open (KERNEL, "+</dev/kmem") || die $!; seek(KERNEL, 0xc01a9d21 + 0x54, 0); syswrite(KERNEL, chr(0x75)); close(KERNEL);'
root@devbox:~#

where we are writing 0x75 in the right position (calculated previously).
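Before writing anything, it is probably worth reading that byte back first, to check it really is the 0x74 (JE) we expect and not something else. A cautious sketch, assuming the dd on the box was built with large-file support:

dd if=/dev/kmem bs=1 skip=$((0xc01a9d21 + 0x54)) count=1 2>/dev/null | od -An -t x1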

And that's all there is to it. If everything went fine, the kernel's behaviour has been modified and nothing has crashed.

Again, only do this on a dev box, under your own responsibility, and only if you really, really know what you are doing. And only only only if you want to play with a live kernel. Remember, I know nothing about computers.

January 26, 2007

Firefox trick

Filed under: Computers, Internet — Tags: , , — jesus @ 10:53

Try this in firefox:

  • Go to any page with lots of images.
  • Clear the URL bar
  • Replace its contents with:
javascript:R=0; x1=.1; y1=.05; x2=.25; y2=.24; x3=1.6; y3=.24; x4=300;y4=200; x5=300; y5=200; DI=document.images; DIL=DI.length; function A(){for(i=0; i-DIL; i++){DIS=DI[ i ].style; DIS.position='absolute'; DIS.left=(Math.sin(R*x1+i*x2+x3)*x4+x5)+"px"; DIS.top=(Math.cos(R*y1+i*y2+y3)*y4+y5)+"px"}R++}setInterval('A()',5); void(0)

Press Enter and enjoy!!!

:-)

December 13, 2006

More on setting a subversion mirror repository

Filed under: Internet, Programming — Tags: , , , — jesus @ 22:34

A few days ago I wrote about setting up a Subversion mirror repository using svnsync. In that entry I was using svn+ssh authentication, but there are some gotchas to take into account to avoid creating a security hole. If you recall, we had a unix user through which people access the repository. If we haven't taken any extra precautions, that means anyone whose ssh key is in that user's ~/.ssh/authorized_keys file can log into the system. And if that same user holds the private key used to reach the remote account hosting the mirror repository, any committer can read that key and thus access the remote machine with write permissions on it.

To avoid this scenario we need to set things up so that users can commit to the main repository but cannot access the remote repository. And that, again, can be done by creating different users (remember, all this applies to the main repository box, not the remote box). Say:

  • svn: the user used to access the repository. It gives any committer write permission on the repository, and its ~/.ssh/authorized_keys file holds the public keys of all the committers so that they can reach the server over svn+ssh as the generic user svn whenever the repository needs to be accessed from a remote location (which is easily done by forwarding your credentials).
  • svnsync: a user which has access to the repository (read access is enough) and owns a couple of wrapper scripts around svnsync. It uses an ssh key to access the remote repository. The rest of the users have no read permission on this user's home directory.
  • The rest of the users are for the committers. They will have write access to the repository. As you may know, with svn+ssh authentication, whenever you do a commit you are basically executing svnserve on the repository under your own user ID (that's why you need write permissions).

This way, committers won’t have access to the svnsync private key. But we must grant them access to the wrapper scripts, so whenever they do a commit, svnserve is able to execute those scripts by triggering a post-commit hook.

All this can be achieved by using sudo. So, say that we have two different wrapper scripts:

  • /home/svnsync/bin/synccommit.sh
    #!/bin/bash
    # this is to synchronize the repository
    # $1 will be the repository
    if [[ -z $1 ]]
      then
        # The repository should be a parameter
        exit
    fi
    
    DATE=`date +%Y%m%d-%H%M `
    
    echo "[$DATE] Commiting to repository $1 . (output below if any)" >> /home/svnsync/svnsync-commit.log 2>&1
    /usr/local/bin/svnsync synchronize --username svnsync "$1" >> /home/svnsync/svnsync-commit.log 2>&1
    
  • /home/svnsync/bin/syncprop.sh
    #!/bin/bash
    # this is to synchronize a property
    
    # $1 will be the repository
    # $2 will be the revision number
    
    if [[ -z $1 || -z $2 ]]
      then
        # The repository should be a parameter
        exit
    fi
    
    DATE=`date +%Y%m%d-%H%M `
    echo "[$DATE] Updating property (Revision $2) to repository $1 . (output below if any)" >> /home/svnsync/svnsync-properties.log 2>&1
    /usr/local/bin/svnsync copy-revprops --username svnsync "$1" "$2"  >> /home/svnsync/svnsync-properties.log 2>&1
    

We can call them by setting up the hook scripts as:

  • post-commit hook:
    # Propagate the data to the remote repository
    sudo -u svnsync /home/svnsync/bin/synccommit.sh svn+ssh://svnsync@remote/home/svnsync/svn &
    
  • post-revprop-change hook:
    # Propagating changes to the remote repository. Putting it to the background
    sudo -u svnsync /home/svnsync/bin/syncprop.sh svn+ssh://svnsync@remote/home/svnsync/svn $REV &
    

    All we need to do now is grant the committers the right to execute both wrapper scripts by adding the following to the /etc/sudoers file (remember, by using visudo):

    %committers ALL= (svnsync) NOPASSWD: /home/svnsync/bin/synccommit.sh, /home/svnsync/bin/syncprop.sh
    svn         ALL= (svnsync) NOPASSWD: /home/svnsync/bin/synccommit.sh, /home/svnsync/bin/syncprop.sh
    
  • Here, committers is a group to which all the committers belong.

December 4, 2006

Speeding up trac's response time

Filed under: Internet, Programming — Tags: , , , , — jesus @ 18:30

I've been trying to speed up an installation of trac over the last few days. The web interface took ages to display each of the directories or files within the subversion repository, even though the repository wasn't too big. The only recent change to it was that we had started using a vendor branch imported into our main repository using svm.

So, after a few hours trying different solutions and reading trac's source code, I think I found where the bottleneck was. It was SQLite (http://www.sqlite.org/download.html) which was causing it. Trac uses a CachedRepository object to access the repositories, and whenever we want to get the changesets, a function to synchronize the repository is called:

class CachedRepository(Repository):
  def get_changeset(self, rev):
    if not self.synced:
      self.sync()
      self.synced = 1
    return CachedChangeset(self.repos.normalize_rev(rev), self.db, self.authz)

and this method, sync(), makes a call to:

youngest_stored = self.repos.get_youngest_rev_in_cache(self.db)

which is all this:

def get_youngest_rev_in_cache(self, db):
    """Get the latest stored revision by sorting the revision strings
    numerically
    """
    cursor = db.cursor()
    cursor.execute("SELECT rev FROM revision ORDER BY -LENGTH(rev), rev DESC LIMIT 1")
    row = cursor.fetchone()
    return row and row[0] or None

And that SQL query was taking around 1-2 seconds each time it was executed. It turned out we were running old versions of sqlite and pysqlite, so a ./configure && make && make install following the recommended installation instructions saved my day :-)
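If you want to check whether this query is the bottleneck on your own installation, you can time it directly with the sqlite3 command-line client (just a sketch; the path to trac.db depends on your trac environment):

time sqlite3 /path/to/trac-env/db/trac.db \
    "SELECT rev FROM revision ORDER BY -LENGTH(rev), rev DESC LIMIT 1"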

Hope it is useful to anybody if it gets indexed by Google.

November 30, 2006

Setting up a subversion mirror repository using svnsync

Filed under: Internet, Programming — Tags: , , , — jesus @ 01:48

With the new version of Subversion, 1.4, you get a new tool called svnsync with which you can maintain mirror repositories quite easily. I've been working on one at work and would like to share my findings in case they interest anyone :-)

Understanding the problem

In order to have a mirror repository, it is important that commits only happen on the master and are then synchronized to the mirror using the svnsync program. The mirror repository can then be used for anything but committing: backups, high-speed checkouts, web front-ends, etc.

So, svnsync must be the only “one” able to commit to the mirror repository. If we use the Apache integration, there are various ways to arrange this. Let's say we are using svn+ssh for authentication, in which case it is more complicated, as ssh access usually grants write access to the file system. So creating a different user is going to be handy.

Creating and populating the repository

So, let's say that we created a user called svnsync on the target machine and that we are going to create a new subversion repository in its home directory:

svnadmin create /home/svnsync/svn

Now, we need to set up a hook to let svnsync change the revision properties. For this, we create /home/svnsync/svn/hooks/pre-revprop-change with:

#!/bin/sh
USER="$3"

if [ "$USER" = "svnsync" ]; then exit 0; fi

echo "Only the svnsync user can change revprops" >&2
exit 1

We will grant access to the user running svnsync on the main machine by copying its ssh key to .ssh/authorized_keys. And now we only need to initialize the remote repository. Note that we can run svnsync from wherever we want, but for the sake of simplicity we will run it on the main machine, where the original repository resides:

$ svnsync init --username svnsync \
      svn+ssh://svnsync@remote/home/svnsync/svn \
      file:///source-svn
    

Note:

  • The syntax is
    svnsync init DEST SOURCE

    That is, the destination repository goes before the source repository.

  • There is no “:” between the host name and the path to the remote repository.

With this command we will have initialized the destination repository, and now we are ready to populate it. We can do that with this command:

svnsync synchronize --username svnsync \
       svn+ssh://svnsync@remote/home/svnsync/svn

And, as we already initialized the repository, there is no need to specify the source repository. This will take more or less time depending on how big your repository is and how fast your network connection is. Hopefully it will have finished after you take a coffee :-)

Creating another user to let users access the repository

So, we will now create a user called svn which will be used to access the repository with the subversion client. As we are using svn+ssh, all we need to do is grant access through this user to all the users that have access to the main repository. If we are using ssh keys, it's as easy as copying all the allowed keys into the /home/svn/.ssh/authorized_keys file.
Also, if we change the permissions on the repository at /home/svnsync/svn (and its parent) to something like

drwxr-xr-x  7 svnsync users 4096 2006-11-28 17:30 svn/

we will let svnsync (the svnsync user) be the owner with write permissions on the repository, and let svn (and all the users ssh'ing in) have read-only access to it (provided both belong to the users group).
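A quick sketch of the commands that would produce that layout (assuming the group really is called users):

chown -R svnsync:users /home/svnsync/svn
chmod -R u+rwX,g+rX,o+rX /home/svnsync/svn

With that in place, a checkout through the svn user works as usual, but a commit attempt is rejected: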

$ svn co svn+ssh://svn@remote/home/svnsync/svn/test
A    test/trunk
[...]
A    test/trunk/readme
Checked out revision 2.
$ echo "test" >> test/trunk/readme
$ cd test/
$ svn -m "test commit" ci
Sending        trunk/readme
Transmitting file data .svn: Commit failed (details follow):
svn: Can't create directory '/home/svnsync/svn/db/transactions/2-1.txn':
Permission denied

And that's all.

Committing to the master repository

In case you want to commit back to the master repository, you need to do an “svn switch --relocate” to point to the master repository, but for that to work without failing, both repositories need to have the same UUID.

  1. To get the UUID on the main machine:
     svnadmin dump -r0 /source-svn | head -n 3 > saved-uuid

  2. Copy the file saved-uuid to the remote machine and do a
     svnadmin load --force-uuid /home/svnsync/svn < saved-uuid

So, things to take into account:

  1. When populating the repository, we use the svnsync user, who has write permissions on the repository (svn+ssh://svnsync@…)
  2. When checking out, we use the svn user (svn+ssh://svn@…)
  3. Automation

     If we want to keep both repositories synchronized, we need to set up a couple of hooks on the source repository.

     post-commit

     Add this to that hook (it needs to be executable and must not have the .tmpl extension):

     # Propagate the data to the remote repository
     /usr/local/bin/svnsync synchronize --username svnsync \
             svn+ssh://svnsync@remote/home/svnsync/svn &

     post-revprop-change

     We also want the property changes to be present on the remote repository:

     # Propagating changes to the remote repository.
     /usr/local/bin/svnsync copy-revprops --username svnsync \
            svn+ssh://svnsync@remote/home/svnsync/svn $REV &

     Note that we put them in the background to let the user carry on with what he was doing.

Final notes

This is a quick guide on how to set things up to have a remote mirror repository. There is much more to it than this, and I recommend you read the documentation and, obviously, make a backup. Doing an “svnadmin dump” only takes a while and it's really worth it.

In any case, just let me know if you find any errors or typos.

October 3, 2006

Mini tutorial on gdb

Filed under: Programming — Tags: , , , — jesus @ 02:43

Thanks to Matthew:

gdb --args program/to/debug --whatever
break Foo.c:532
# to put a breakpoint at line 532 of Foo.c
run # to run it, until it hits that line
bt # to display the stacktrace (backtrace)
frame 2 # to change the stack frame you're looking at
list # to list the sourcecode for that stack frame
step # to step into the next command for that frame
next # to run the code to the next command
continue # to continue onwards
print bar # to print the contents of variable bar
printf "%08x", bar # standard printf for bar
delete 2 # to remove the 2nd breakpoint

Obviously, you need to compile things with -g as an argument to GCC.
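For completeness, a minimal sketch of the compile-and-debug cycle (foo.c is just a stand-in for your own program):

gcc -g -O0 -o foo foo.c    # -g adds debug symbols, -O0 keeps the code easy to follow
gdb --args ./foo --whatever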
